The HCI seminar meets every other week, on the first and third Tuesdays of each month.
📅 Tuesdays, 4pm-5pm
Gregory Abowd, Northeastern University
📅 November 16, 2021 (Tuesday), 4pm - 5pm
📍 TBD, In Person
TBD
Bio:TBD
Michael Coblenz, University of Maryland
📅 November 9, 2021 (Tuesday), 4pm - 5pm
📍 TBD, In Person
TBD
Bio:TBD
Krzysztof Gajos, Harvard University
📅 November 2, 2021 (Tuesday), 4pm - 5pm
📍 TBD
TBD
Bio:TBD
Hae Won Park, MIT Media Lab
📅 May 07, 2018 (Monday), 11am - 12pm
📍 TBD, In Person
TBD
Bio:TBD
Evan Peck, Bucknell University
📅 September 28, 2021 (Tuesday), 4pm - 5pm
📍 CSAIL Star Room, Stata Center (32-D463)
TBD
Bio:TBD
Martin Wattenberg, Harvard University
📅 September 14, 2021 (Tuesday), 4pm - 5pm
📍 Zoom
Artificial intelligence isn’t a single technology—it’s become a broad field, with applications to almost every area of life. As a result, we can’t view it with just a single lens. Instead, we should use every tool at our disposal. Yes, math and engineering are important, but design and even art are critical as well. Through a series of examples I will discuss multiple ways of knowing and understanding how AI works, and how using these different lenses together can broaden participation in the field of AI.
Bio:
Martin Wattenberg is Gordon McKay Professor of Computer Science at Harvard. He also has a part-time role at Google Research, where he co-founded the People + AI Research initiative. His work at Google, with long-time collaborator Fernanda Viégas, focuses on making AI technology broadly accessible and reflective of human values. Their team has also created end-user visualizations for products such as Search, YouTube, and Google Analytics.
Wattenberg’s experience also includes co-founding a visualization studio (Flowing Media, Inc.), as well as IBM’s Visual Communication Lab, which created the ground-breaking public visualization platform Many Eyes. Wattenberg came to IBM from Dow Jones, where he was the Director of Research and Development at SmartMoney.com. His work there included some of the earliest pieces of interactive journalism.
Wattenberg has a Ph.D. in mathematics from U.C. Berkeley, focusing on dynamical systems. His visualization-based artwork has been exhibited in museums around the world.
Raf Ramakers, Hasselt University
📅 May 18, 2021 (Tuesday), 1pm - 2pm
📍 Zoom
With the advent of the industrial revolution in the early 19th century came the need for measurement and precision, as parts had to fit together and be mass-produced according to tolerances. This quest for precision in industry gave rise to new fields such as metrology, as well as numerous advanced machines and instruments. Over the past decade, several of these technologies, including digital fabrication machines, have become widely available and accessible to makers, DIY enthusiasts, researchers, and educators in various domains. While the widespread availability of digital fabrication tools empowers many people outside the field of engineering to experiment with new ideas and prototype physical artifacts themselves, these novel technologies also expose users to the very precise specifications used in industry, for which digital fabrication machines and supporting software tools were originally developed. Measurements and precise specifications are, however, a major source of errors in fabrication workflows. In this talk, I therefore consider research in fabrication, prototyping, and craft from the perspective of measurement, and in particular its user aspects. I also present several novel fabrication techniques that demonstrate how one can fabricate without explicit measurements.
Bio:Raf Ramakers is an assistant professor in computer science at Hasselt University. His research in Human-Computer Interaction takes an engineering perspective and focuses on digital fabrication and physical computing. More specifically, in his research he builds and investigates novel hardware and software tools to facilitate prototyping physical artifacts and sensor systems. He enjoys working across different disciplines, which has resulted in ACM UIST and CHI publications that cross the boundaries of human-computer interaction, electronic engineering, product design, and material science. Raf also regularly serves on the program committees of these top-tier venues. Raf received his PhD in computer science in 2016 from Hasselt University. His PhD thesis was awarded both the FWO-IBM Innovation Award and the FWO-Nokia Bell Scientific Prize, which are given annually to the best Belgian PhD dissertation in computer science and communication technology, respectively.
Anne Marie Piper, UC Irvine
📅 April 27, 2021 (Tuesday), 1pm - 2pm
📍 Zoom
Approximately 61 million Americans, or one in four U.S. adults, have a disability that affects daily life. Despite the prevalence of disability across the lifespan, accessibility is typically an afterthought in technology design. Discussions of accessibility often center on checklists of requirements and whether or not a system has particular features. In this talk, I will argue for a view of accessibility that is collaboratively negotiated, situated, and enacted through sociomaterial relations. Grounded in extensive field work, I will present several cases of design for accessibility that shift how we think about building systems with and for individuals with disabilities. These projects detail new systems for collaborative meaning-making in the context of dementia, online social advocacy among blind and visually impaired older adults, and ability-diverse group work and design. Collectively, these projects reveal the interactive nature of accessibility that is often missing in individualistic system design and call attention to the importance of the social and political dimensions of accessibility alongside the technological.
Bio:Anne Marie Piper is an Associate Professor in the Department of Informatics at the University of California, Irvine. Her research in human-computer interaction focuses on designing and studying new technologies to support communication, social interaction, and learning for people across the lifespan. Her research is funded through four NSF awards, including a CAREER award, and has been recognized with numerous Best Paper Awards and Nominations at ACM CHI, CSCW, DIS, and ASSETS. She was named a U.S. National Academy of Sciences Kavli Fellow and received Northwestern’s Simon Award for Teaching Excellence and UC-San Diego’s Interdisciplinary Scholar Award. Anne Marie earned her PhD in Cognitive Science from the University of California, San Diego, MA in Education from Stanford University, and BS in Computer Science from Georgia Tech. Prior to joining UC-Irvine, she was a tenured faculty member at Northwestern University.
Mary Beth Kery, CMU/Apple
📅 April 13, 2021 (Tuesday), 1pm - 2pm
📍 Zoom
Many aspects of data/ML programming challenge our understanding of what makes for “good” software engineering. In my research I argue that something is missing in Git and our classic version control approaches, which fail to capture the needs of a data/ML workflow. Unlike iteration in standard software development, many stages of data/ML work require heavy (and very rapid) experimentation with different approaches, many of which may not work out. We call this kind of experiment-driven code exploratory programming. In this talk I will describe my research to better understand how exploratory programming impacts data/ML practitioners’ work, and to develop new ways to support a useful history of experimentation. With collaborators, I have designed a series of prototype tools that imagine new forms of automated version control to help data/ML workers. I discuss our design approach, involving multiple extensive design cycles and studies, each of which led to important clues about practitioners’ information needs and behaviors. I will share the culmination of this work in Verdant, and preliminary results from our latest study. Finally, I will discuss my own broader directions for future research and implications for designing solid developer tooling that serves the everyday programmer.
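To make the idea of an automated history of experimentation concrete, here is a minimal, hypothetical sketch (not Verdant or any tool from the talk) that logs each exploratory run together with its code, parameters, and output so earlier attempts can be recalled and compared later:

```python
import hashlib
import json
import time

history = []  # in-memory log of experiment runs; a real tool would persist this

def record_run(params, source, output):
    """Snapshot one exploratory run: code, parameters, result, and timestamp."""
    run_id = hashlib.sha1(
        (source + json.dumps(params, sort_keys=True)).encode()
    ).hexdigest()[:8]
    history.append({
        "id": run_id,
        "time": time.strftime("%Y-%m-%d %H:%M:%S"),
        "params": params,
        "source": source,
        "output": output,
    })
    return run_id

# Two rapid experiments with different hyperparameters (illustrative values)
src = "score = train(lr=params['lr'])"
record_run({"lr": 0.1}, src, {"accuracy": 0.81})
record_run({"lr": 0.01}, src, {"accuracy": 0.86})

# Later: recall which configuration produced the best result
best = max(history, key=lambda run: run["output"]["accuracy"])
print(best["id"], best["params"], best["output"])
```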
Bio:Mary Beth Kery (she/her) is a PhD candidate at the Human Computer Interaction Institute at Carnegie Mellon University, advised by Brad A. Myers. Mary Beth’s research interests include everything about developer tools, with a general goal to improve developer experience for all. She is particularly interested in helping the average programmer handle challenging and sensitive issues with data.
Ryo Suzuki, University of Calgary
📅 March 30, 2021 (Tuesday), 1pm - 2pm
📍 Zoom
With the advent of immersive technologies, we can now blend our virtual and physical worlds more seamlessly than ever before. However, objects seen in AR/VR are currently only visual: the user cannot touch, feel, grasp, manipulate, and physically interact with virtual objects in the same way we do in the real world. The lack of tangibility significantly limits our interaction and experience, but this is an inherent limitation of current immersive technologies, as AR/VR can only visually augment reality, leaving our physical world static and non-programmable. How, then, can we make our world programmable, not only through visually presented graphics but also through physically reconfigurable environments? In this talk, I explore "reconfigurable reality", which aims to further blend our virtual and physical worlds through environments that are both visually and physically programmable. I illustrate this potential future by leveraging distributed swarm robots at different scales (from mm- to m-scale), which give us a means of physically displaying information, providing haptic sensations, reconfiguring space, constructing graspable objects, and calmly supporting our everyday activities. I believe these robots will soon increasingly enter our everyday lives and seamlessly weave themselves into the fabric of our living environments to make our physical world more adaptive, dynamic, and reconfigurable. This talk outlines how these collectively actuated interfaces can complement current immersive technologies and illustrates how such ubiquitous robots can shape the future of human-computer interaction.
Bio:Ryo Suzuki is a new Assistant Professor in Computer Science at the University of Calgary. Prior to joining UCalgary in 2020, he was a Ph.D. student at the University of Colorado Boulder, where he was advised by Daniel Leithinger and Mark Gross. His research interest lies at the intersection of Human-Computer Interaction (HCI) and robotics. He explores how we can combine AR/VR and robotics technologies to make our environments programmable and further blend the virtual and physical worlds. In the past five years, he has published more than fifteen peer-reviewed conference papers at top HCI and robotics venues, such as CHI, UIST, and IROS, and has received three paper awards. Previously he also worked as a research intern at Stanford University, UC Berkeley, the University of Tokyo, Adobe Research, and Microsoft Research.
Kristen Vaccaro, UC San Diego
📅 March 16, 2021 (Tuesday), 1pm - 2pm
📍 Zoom
Machine learning systems are increasingly a part of everyday users' lives, from search systems to social media news feeds. But the automated intelligence behind capabilities like curation, moderation, and recommendation has long been seen in contrast to individual agency and choice. In fact, debates over whether to prioritize machine learning or user control date back decades. My work seeks to improve users' experiences of algorithmic systems, particularly those involving human-AI collaboration, while maintaining user agency and control. In this talk, I will discuss that work and how we can design machine learning systems so that users can understand them, engage with them, and feel empowered in an increasingly automated world.
Bio:Kristen Vaccaro is an Assistant Professor in Computer Science & Engineering at the University of California San Diego, where her research focuses on designing algorithmic systems for user agency and control. Using qualitative and quantitative methods, she has studied existing systems and new prototypes to capture user understanding, behavior, and adaptations around mechanisms for control. Her work also explores questions of fairness, justice, and policy around such technologies.
Alexandra To, Northeastern University
📅 March 2, 2021 (Tuesday), 1pm - 2pm
📍 Zoom
Technology frequently marginalizes people from underrepresented and vulnerable groups; more and more, we’re learning how social media platforms, AI systems, machine learning algorithms, video games, etc., can enact, amplify, or perpetuate discrimination. In this talk, I will share two studies within a project that exemplify the methods I use for gathering personal narratives of marginalization and for developing and evaluating empowering games and social technologies. The CARE (coping after racist experiences) project uses interactive narrative to study how people experience, cope with, and seek support for interpersonal racism such as racist microaggressions. I will share participatory design work that brings people from marginalized backgrounds to the table in designing for a more empowered future as well as provotypes (provocative prototypes) resulting from that design work. I will end by proposing several promising avenues for future work that extends my work adapting critical race theory to HCI and games research.
Bio:Alexandra To is an Assistant Professor at Northeastern University jointly appointed in the Art + Design (Games) department in the College of Art, Media, and Design and the Khoury College of Computer Science. Her core research interests are in studying and designing social technologies to empower people in marginalized contexts. She uses qualitative methods to gather counterstories and participatory methods to design for the future. Alexandra is a racial justice activist, a critical race scholar, and game designer. She received her PhD in Human-Computer Interaction from Carnegie Mellon University. She previously received a B.S. and M.S. in Symbolic Systems from Stanford University with a focus on HCI and a minor in Asian American Studies. She has received multiple ACM Best Paper awards and published at CHI, UIST, CSCW, CHI Play, ToDiGRA, and DIS.
Philip Guo, UC San Diego
📅 September 10, 2019 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Modern-day programming is incredibly complex, and people from all sorts of backgrounds are now learning it. It is no longer sufficient just to learn how to code: one must also learn to work effectively with data and with the underlying software environment. In this talk, I will present three systems that I have developed to support learning of code, data, and environment, respectively: (1) Python Tutor, a run-time code visualization and peer tutoring system that has been used by over five million people in over 180 countries to form mental models and to help one another in real time; (2) DS.js, which uses the web as a nearly-infinite source of motivating real-world data to scaffold data science learning (UIST 2017 Honorable Mention Award); and (3) Porta, which helps experts create technical software tutorials that involve intricate environmental interactions (UIST 2018 Best Paper Award). These systems collectively point toward a future where anyone around the world can gain the skills required to become a productive modern-day programmer.
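For context, the kind of short program a learner might paste into Python Tutor to watch the call stack, heap objects, and aliasing evolve step by step looks like the toy snippet below (an illustration, not part of the systems themselves):

```python
# A short program of the sort learners visualize at pythontutor.com,
# stepping through it to see frames, variables, and heap objects.

def factorial(n):
    """Recursive factorial: each call adds a frame the visualizer can draw."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)

nums = [1, 2, 3]
alias = nums              # both names point to the same list object on the heap
alias.append(factorial(4))
print(nums)               # [1, 2, 3, 24] -- aliasing makes the append visible via both names
```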
Bio:Philip Guo is an assistant professor of Cognitive Science and an affiliate assistant professor of Computer Science and Engineering at UC San Diego. His research spans human-computer interaction, programming tools, and online learning. He now focuses on building scalable systems that help people learn computer programming and data science. He is the creator of Python Tutor (http://pythontutor.com/), a widely-used code visualization and collaborative learning platform. So far, over five million people in over 180 countries have used it to visualize over 100 million pieces of Python, Java, JavaScript, C, C++, and Ruby code. Philip's research has won Best Paper and Honorable Mention awards at the CHI, UIST, ICSE, and ISSTA conferences, and an NSF CAREER award. Philip received S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT and a Ph.D. in Computer Science from Stanford. His Ph.D. dissertation was one of the first to create programming tools for data scientists. Before becoming a professor, he built online learning tools as a software engineer at Google, a research scientist at edX, and a postdoc at MIT. Philip's website http://pgbovine.net/ contains over 600 articles, videos, and podcast episodes and gets over 750,000 page views per year.
Pattie Maes, MIT Media Lab
📅 April 30, 2019 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
While today's pervasive digital devices put the world’s information at our fingertips, they do not help us with some of the cognitive skills that are arguably more important to leading a successful and fulfilling life, such as attention, memory, motivation, creativity, mindful behavior, and emotion regulation. Building upon insights from psychology and neuroscience, the Fluid Interfaces group creates systems and interfaces for cognitive enhancement. Our designs enhance cognitive ability by teaching users to exploit and develop the untapped powers of their minds and by seamlessly supplementing users' natural cognitive abilities. Our solutions are compact and wearable, and are designed for real-world studies and interventions, rather than laboratory settings. Our work is highly interdisciplinary and combines insights and methods from human computer interaction, body sensor technologies, machine learning, brain computer interfaces, psychology, and neuroscience to create new opportunities for studying and intervening in human psychology in-the-wild.
Bio:Pattie Maes is a professor in MIT's Program in Media Arts and Sciences. She runs the Media Lab's Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Coming from a background in artificial intelligence and human-computer interaction, she is particularly interested these days in the topic of cognitive enhancement, or how immersive and wearable systems can actively assist people with memory, attention, learning, decision making, communication, and wellbeing. Maes is the editor of three books, and is an editorial board member and reviewer for numerous professional journals and conferences. She has received several awards: Fast Company named her one of 50 most influential designers (2011); Newsweek picked her as one of the "100 Americans to watch for" in the year 2000; TIME Digital selected her as a member of the “Cyber Elite,” the top 50 technological pioneers of the high-tech world; the World Economic Forum honored her with the title "Global Leader for Tomorrow"; Ars Electronica awarded her the 1995 World Wide Web category prize; and in 2000 she was recognized with the "Lifetime Achievement Award" by the Massachusetts Interactive Media Council. In addition to her academic endeavors, Maes has been an active entrepreneur as co-founder of several venture-backed companies, including Firefly Networks (sold to Microsoft), Open Ratings (sold to Dun & Bradstreet) and Tulip Co (privately held). Prior to joining the Media Lab, Maes was a visiting professor and a research scientist at the MIT Artificial Intelligence Lab. She holds a bachelor's degree in computer science and a PhD in artificial intelligence from the Vrije Universiteit Brussel in Belgium.
Dominik Moritz, University of Washington
📅 April 26, 2019 (Friday), 2pm - 3pm
📍 32-G882, Stata Center
Making sense of large and complex data requires methods that integrate human judgment and domain expertise with modern data processing systems. To meet this challenge, my work combines methods from visualization, data management, human-computer interaction, and programming languages to enable more effective and more scalable methods for interactive data analysis and communication. More specifically, my research investigates automatic reasoning over domain-specific representations of visualization and analysis workflows, in order to produce both improved human-centered designs and system performance optimizations. My work on Vega-Lite provides a high-level declarative language for rapidly creating interactive visualizations. Vega-Lite can serve as a convenient representation for tools that generate visualizations. To create effective designs, these tools must also consider perceptual principles of design. My work on Draco provides a formal model of visual encodings, a knowledge base to reason about visualization design decisions, and methods to learn design rules from experiments. Draco can formally reason over the visualization design space to recommend appropriate designs, but its applications go far beyond recommendation. Draco makes theoretical design knowledge a shared resource that can be extended, tested, and systematically discussed in the research community. The Falcon and Pangloss systems enable scalable interaction and exploration of large data volumes by making principled trade-offs among people’s latency tolerance, precomputation, and approximation of computations. A recurring strategy across these projects is to leverage an understanding of people’s tasks and capabilities to inform system design and optimization.
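To give a sense of what a high-level declarative specification looks like in practice, here is a minimal bar chart written with Altair, the Python bindings for Vega-Lite; the data and field names are invented for illustration:

```python
import altair as alt
import pandas as pd

# Toy data; the chart below compiles to a short declarative Vega-Lite JSON spec.
df = pd.DataFrame({"category": list("ABCDE"), "count": [28, 55, 43, 91, 81]})

chart = (
    alt.Chart(df)
    .mark_bar()                            # what to draw
    .encode(
        x="category:N",                    # nominal field on the x channel
        y="count:Q",                       # quantitative field on the y channel
        tooltip=["category", "count"],     # a simple built-in interaction
    )
)

chart.save("bar_chart.html")  # renders the underlying Vega-Lite spec in a browser
```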
Bio:Dominik Moritz is a Computer Science PhD candidate at the University of Washington. He works with Jeffrey Heer and Bill Howe in the Interactive Data Lab and the Database Group. Dominik’s research develops scalable interactive systems for visualization and analysis. His systems have won awards at premier academic venues and are available as open source projects with significant adoption by the Python and JavaScript data science communities.
Walter Lasecki, University of Michigan
📅 April 23, 2019 (Tuesday), 2pm - 3pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Intelligent systems hold the potential to enable natural, fluid, and efficient interactions with computational tools, but there is a snag: artificial intelligence (AI) is far from being able to understand (e.g., via natural language or vision) and reason about nuanced, real-world settings in full generality. While machine learning (ML) has had significant success on specific classes of problems, generating the massive, tailored training data sets that are needed to make these algorithms work across domains reliably remains a significant challenge. In this talk, I will show that we can use real-time crowdsourcing workflows to create robust intelligent systems that work in a broad range of interactive settings by scaffolding AI/ML capabilities with human intelligence. These scaffolds can facilitate and accelerate on-the-fly training, and are designed to gracefully progress towards full automation as AI becomes more effective in the coming decades. Further, this strategic combination of human and machine effort allows us to create systems that greatly exceed what either can do alone. I will conclude with a discussion of how the insights gained from designing these hybrid intelligence systems can inform richer human-AI interaction, and even allow us to fundamentally rethink how we approach work and organization at all scales.
Bio:Walter S. Lasecki is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor, where he is the founding director of the Center for Hybrid Intelligence Systems, and leads the Crowds+Machines (CROMA) Lab. He also previously co-directed the UM-IBM Sapphire Project center, a 20+ member initiative to advance conversational technologies. He and his students create interactive intelligent systems that are robust enough to be used in real-world settings by combining both human and machine intelligence to form Hybrid Intelligence Systems ("HyIntS") that are able to exceed the capabilities of both humans and machines alone. These systems help people be more productive, and improve access to the world for people with disabilities. Prof. Lasecki received his Ph.D. and M.S. from the University of Rochester in 2015 and a B.S. in Computer Science and Mathematics from Virginia Tech in 2010. He has previously held visiting research positions at CMU, Stanford, Microsoft Research, and Google[x].
Steve Franconeri, Northwestern University
📅 April 09, 2019 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Your visual system evolved and develops to process the scenes, faces, and objects of the natural world. You then adapt that system to process the artificial world of graphs, maps, and data visualizations. This adaptation can lead to either fast and powerful – or deeply slow and inefficient – visual processing. I’ll use interactive visual tasks to demonstrate the powerful capacity limits that arise when we extract structure and meaning from these artificial displays, which I will argue must occur via a slow serial language-like representation. Understanding these constraints leads to guidelines for display design and instruction techniques, across information dashboards, slide presentations, and STEM education.
Bio:Steven Franconeri is a Professor of Psychology at Northwestern (Weinberg College), with courtesy appointments in Leadership (Kellogg School of Business) and Design (McCormick School of Engineering), and he serves as Director of the Northwestern Cognitive Science Program. His research is on visual thinking, visual communication, decision making, and the psychology of data visualization. Franconeri directs the Visual Thinking Laboratory, where a team of researchers explore how leveraging the visual system - the largest single system in your brain - can help people think, remember, and communicate more efficiently. The laboratory’s basic research questions are inspired by real-world problems, providing perspective for new and existing theories, while producing results that translate directly to science, education, design, and business.
Michelle Borkin, Northeastern
📅 March 19, 2019 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
What can help enable both the treatment of heart disease and the discovery of newborn stars? Visualization; specifically, interdisciplinary data visualization: the sharing and co-development of tools and techniques across domains. Visualization is a powerful tool for data exploration and analysis. With data ever-increasing in quantity, effective visualizations are necessary for knowledge discovery and data insight. In this talk I will share sample results from my own research and experience crossing disciplines and bringing together the knowledge and experts of computer science, astrophysics, radiology, and medicine. I will present new visualization techniques and tools inspired by this work for the astronomical and medical communities, including Glue, a multi-dimensional linked-data visual exploration tool.
Bio:Dr. Michelle Borkin works on the development of novel visualization techniques and tools to enable new insights and discoveries in data. She works across disciplines to bring together computer scientists, doctors, and astronomers to collaborate on new analysis and visualization techniques, and cross-fertilize techniques across disciplines. Her research has resulted in the development of novel computer assisted diagnostics in cardiology, scalable visualization solutions for large network data sets, and novel astrophysical visualization tools and discoveries. Her main research interests include information and scientific visualization, hierarchical and multidimensional data representations, network visualization, visualization cognition, user interface design, human computer interaction (HCI), and evaluation methodologies. Dr. Borkin is an Assistant Professor in the Khoury College of Computer Sciences at Northeastern University. Prior to joining Northeastern, she was a Postdoctoral Research Fellow in Computer Science at the University of British Columbia, as well as an Associate in Computer Science at Harvard and a Research Fellow at Brigham & Women’s Hospital. She received her Ph.D. in Applied Physics at Harvard’s School of Engineering and Applied Sciences (SEAS) in 2014. She also has an MS in Applied Physics and a BA in Astronomy and Astrophysics & Physics from Harvard University. She was previously a National Science Foundation (NSF) Graduate Research Fellow, a National Defense Science and Engineering Graduate (NDSEG) Fellow, and a TED Fellow.
Brad A. Myers, CMU
📅 March 5, 2019 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Software engineers might think that human-computer interaction (HCI) is all about improving the interfaces for their target users through user studies. However, software engineers are people too, and they use a wide variety of technologies, from programming languages to search engines to integrated development environments (IDEs). And the field of HCI has developed a wide variety of human-centered methods, beyond lab user studies, which have been proven effective for answering many different kinds of questions. In this talk, I will use examples from my own research to show how HCI methods can be successfully used to improve the technologies used in the software development process. For example, "Contextual Inquiry" (CI) is a field study method that identifies actual issues encountered during work, which can guide research and development of tools that will address real problems. We have used CIs to identify nearly 100 different questions that developers report they find difficult to answer, which inspired novel tools for reverse-engineering unfamiliar code and for debugging. We used the HCI techniques of Paper Prototyping and Iterative Usability Evaluations to improve our programming tools. Through the techniques of Formal User Studies, we have validated our designs, and quantified the potential improvements. Current work is directed at improving the usability of APIs, using user-centered methods to create a more secure Blockchain programming language, addressing the needs of data analysts who do exploratory programming, helping programmers organize information found on the web, and helping end-user programmers augment what intelligent agents can do on smartphones.
Bio:Brad A. Myers is a Professor in the Human-Computer Interaction Institute in the School of Computer Science at Carnegie Mellon University. He was chosen to receive the ACM SIGCHI Lifetime Achievement Award in Research in 2017, for outstanding fundamental and influential research contributions to the study of human-computer interaction. He is an IEEE Fellow, ACM Fellow, member of the CHI Academy, and winner of 12 Best Paper type awards and 5 Most Influential Paper Awards. He is the author or editor of over 500 publications, including the books "Creating User Interfaces by Demonstration" and "Languages for Developing User Interfaces," and he has been on the editorial board of six journals. He has been a consultant on user interface design and implementation to over 85 companies, and regularly teaches courses on user interface design and software. Myers received a PhD in computer science at the University of Toronto where he developed the Peridot user interface tool. He received the MS and BSc degrees from the Massachusetts Institute of Technology during which time he was a research intern at Xerox PARC. From 1980 until 1983, he worked at PERQ Systems Corporation. His research interests include user interfaces, programming environments, programming language design, end-user software engineering (EUSE), API usability, developer experience (DevX or DX), interaction techniques, programming by example, handheld computers, and visual programming. He belongs to ACM, SIGCHI, IEEE, and the IEEE Computer Society.
Karrie Karahalios, UIUC
📅 February 12, 2019 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Algorithms play a large role in shaping what we see and don’t see online. In this talk I discuss people’s awareness of algorithms in their daily online social lives, the power people attribute to algorithms, and how and when people become disillusioned by them. I further discuss two approaches to providing people with control, and whether people want that control.
Bio:Karrie Karahalios is a Professor of Computer Science, a Co-director of the Center for People and Infrastructures at the University of Illinois at Urbana-Champaign, and a Senior Research Scientist at Adobe Research. She completed an S.B. in Electrical Engineering, an M.Eng. in Electrical Engineering and Computer Science, and an S.M. and Ph.D in Media Arts and Sciences at MIT. Her main area of research is Social Computing—more specifically, social network analysis, relationship modeling, social media interface design, social media feed algorithm awareness/literacy, social visualization, group dynamics, speech delay assistive technologies, and tools for speech-delay diagnoses. She has been awarded a Sloan Research Fellowship, a Harvard Berkman Center for Internet and Society Fellowship, a Kavli Fellowship, the A. Richard Newton Breakthrough Research Award, an NSF Early Career Award, and an NCSA Fellowship, among others.
Steve Hodges, Microsoft
📅 Dec 10, 2018 (Monday), 4pm - 5pm
📍 Star Seminar Room, Stata Center (32-D463)
In the first part of this talk I will describe SenseCam, one of the first wearable cameras to be developed, and its application in support of patients with memory impairments. As a researcher who aims to seed new types of hardware device in the market and change people’s perceptions of how they can use technology, in many ways SenseCam was the ‘perfect’ project. The device was adopted enthusiastically, both by memory-impaired patients wishing to improve their recall, and by researchers and clinicians as a tool to support their work. Unfortunately, in the long-term SenseCam has not (yet) proven to be a viable commercial product. Despite recent advances in tools, processes and resources in support of hardware design and prototyping, I believe it’s actually becoming more difficult to make the transition from research prototype to commercially viable product. In the second part of the talk I will present my perspectives on why this might be, along with some ideas about how it might be addressed. My ultimate aim is to enable a ‘long tail’ of hardware products, which fuel innovation in the device space whilst simultaneously providing greater customer choice.
Bio:Steve Hodges joined Microsoft’s Cambridge research lab in 2004 with the ambition of building hardware systems that change people’s perceptions of technology and how it can be used. He founded and led the Sensors and Devices research group, exploring emerging hardware technologies and creating compelling novel interactive devices and experiences for individuals, communities and organizations. Through collaborations within Microsoft and with external partners, many of his innovations have successfully made the transition to product. Steve subsequently led hardware strategy and development for the Azure Sphere connected device security solution. Steve’s technical expertise spans interactive systems, connected devices, wireless communications, novel sensing and displays, embedded camera systems, location systems, energy management, security, wearable technologies and rapid prototyping. He collaborates widely to conceive, develop, evaluate, and disseminate creative new ideas in domains which include the internet of things, mobile computing, assistive technologies, computer science education and the developer and maker communities. By combining hardware-related research insights with emerging technologies he aims to seed new concepts, tools and technologies in the market.
Jim Hollan, UC San Diego
📅 Dec 11, 2018 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Ideas have histories and, like people, can only be fully appreciated in the context of those histories. Bill Buxton describes what he terms the long nose of innovation and heralds the wisdom of mining and drawing inspiration from past research. The objective of my talk today is twofold. First, I will follow Bill’s advice and reflect on my past and current research to identify the underlying ideas worth future mining. Second, I will argue for the promise of a new project to develop a cognitive physics for information, designed to ease information-based tasks by operating in accordance with cognitively motivated rules sensitive to tasks, personal and group interaction histories, and context.
Bio:Jim Hollan is Professor of Cognitive Science and Computer Science at UC San Diego and Co-Director of the Design Lab. After completing a postdoc in AI at Stanford, he spent the early part of his career on the faculty at UCSD, working with Ed Hutchins and Don Norman and leading the Intelligent Systems Group. He also consulted at Xerox PARC. He then led the MCC Human Computer Interaction Lab and established the Computer Graphics and Interactive Media research group at Bellcore. He then became Chair of the Computer Science Department at the University of New Mexico and subsequently returned to UCSD in 1997 with appointments in the Department of Cognitive Science and Department of Computer Science and Engineering. In 2003 he was elected to the Association for Computing Machinery’s CHI Academy as one who “has made extensive contributions to the study of HCI and led the shaping of the field.” In 2015 he received the ACM CHI Lifetime Research award and recently was honored with the title of Distinguished Professor of Cognitive Science at UC San Diego.
Dzmitry Tsetserukou, Skolkovo Institute of Science and Technology
📅 Dec 3, 2018 (Monday), 11am - 12pm
📍 Kiva Seminar Room, Stata Center (32-G449)
We propose a novel interaction strategy for human-swarm communication in which a human operator guides a formation of quadrotors with impedance control and receives tactile feedback. The presented approach takes into account the human hand velocity and changes the formation shape and dynamics accordingly using impedance interlinks simulated between quadrotors, which helps to achieve a life-like swarm behavior. Experimental results with the Crazyflie 2.0 quadrotor platform validate the proposed control algorithm. We propose tactile patterns representing the dynamics of the swarm (extension or contraction). The user feels the state of the swarm at their fingertips and receives valuable information to improve the controllability of the complex life-like formation. A user study revealed high recognition rates for the patterns. Subjects stated that tactile sensation improves the ability to guide the drone formation and makes human-swarm communication much more interactive. The proposed technology can potentially have a strong impact on human-swarm interaction, providing a new level of intuitiveness and immersion in swarm navigation.
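As a rough, one-dimensional illustration of the impedance-interlink idea (a toy sketch with invented gains, not the authors' controller), each follower below behaves like a virtual mass-spring-damper attached to the operator-driven leader, so the formation deforms with the hand's motion and then settles back to its shape:

```python
import numpy as np

# Virtual impedance link (1-D): m*acc = -k*(pos - leader - offset) - d*(vel - leader_vel)
m, d, k = 1.0, 4.0, 10.0                 # virtual mass, damping, stiffness (illustrative)
dt = 0.01

offsets = np.array([-0.5, 0.0, 0.5])     # desired follower positions relative to the leader
pos = offsets.copy()                     # follower positions
vel = np.zeros(3)                        # follower velocities

leader_pos, leader_vel = 0.0, 1.0        # the operator's hand moves at constant speed

for _ in range(200):                     # simulate 2 seconds
    leader_pos += leader_vel * dt
    acc = (-k * (pos - leader_pos - offsets) - d * (vel - leader_vel)) / m
    vel += acc * dt
    pos += vel * dt

print(np.round(pos - leader_pos, 3))     # followers settle near their desired offsets
```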
Bio:Dzmitry Tsetserukou received the Ph.D. degree in Information Science and Technology from the University of Tokyo, Japan, in 2007. From 2007 to 2009, he was a JSPS Post-Doctoral Fellow at the University of Tokyo. He worked as an Assistant Professor at the Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, from 2010 to 2014. Since August 2014 he has worked at the Skolkovo Institute of Science and Technology as Head of the Intelligent Space Robotics Laboratory. Dzmitry has been a member of the Institute of Electrical and Electronics Engineers (IEEE) since 2006 and is the author of over 70 technical publications, 3 patents, and a book. His research interests include swarms of drones, wearable haptic and tactile displays, robot manipulator design, telexistence, human-robot interaction, affective haptics, virtual reality, and artificial intelligence. Dzmitry is the winner of the Best Demonstration Award (Bronze prize, AsiaHaptics 2018), the Laval Virtual Award (ACM Siggraph 2016), the Best Presentation Award (IRAGO 2013), and the Best Paper Award (ACM Augmented Human 2010). He was an organizer of the first Workshop on Affective Haptics at IEEE Haptics Symposium 2012.
Ranjitha Kumar, University of Illinois at Urbana-Champaign
📅 Dec 4, 2018 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Having access to the right types of data at scale is increasingly the key to designing innovation. In this talk, I’ll discuss how my group has created original datasets for three domains — mobile app design, fashion retail, and social networks — and leveraged them to build novel user experiences. First, I’ll present a system for capturing and aggregating interaction data from third-party Android apps to identify effective mobile design patterns: open sourcing analytics that were previously locked away. Next, I’ll discuss fashion data collected with Wizard of Oz chatbots, used to model deep learning frameworks for automating personal styling advice. Finally, I’ll introduce an emoji-based social media platform designed to incentivize curation and map the “taste graph” of its users.
Bio:Ranjitha Kumar is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC), where she leads the Data-Driven Design group. She is the recipient of a 2018 NSF CAREER award, and UIUC’s 2018 C.W. Gear Outstanding Junior Faculty Award. Her research has won best paper awards/nominations at premier conferences in HCI, and is supported by grants from Google, Amazon, and Adobe. She received her PhD from the Computer Science Department at Stanford University in 2014, and was formerly the Chief Scientist at Apropose, Inc., a data-driven design company she founded that was backed by Andreessen Horowitz and New Enterprise Associates.
John Stasko, Georgia Institute of Technology
📅 Nov 27, 2018 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Everyone’s talking about data these days. People, organizations, and businesses are seeking better ways to analyze, understand, and communicate their data. While a variety of approaches can be taken to address this challenge, my own work has focused on data visualization. In this talk, I’ll describe the unique advantages and benefits that visualization provides, and I’ll support these arguments through examples from recent projects in my research group. Two specific themes that I’ll emphasize are the importance of interaction in visualization and the challenge of determining a visualization’s value and utility.
Bio:John Stasko is a Regents Professor in the School of Interactive Computing (IC) at the Georgia Institute of Technology, where he has been on the faculty since 1989. His Information Interfaces Research Group develops ways to help people and organizations explore, analyze, and make sense of data to solve problems. Stasko is a widely published and internationally recognized researcher in the areas of information visualization and visual analytics, approaching each from a human-computer interaction perspective. He has received Best Paper or Most Influential/Test of Time Paper awards from the IEEE InfoVis and VAST, ACM CHI, INTERACT, and ICSE conferences. Stasko has been Papers/Program Co-Chair for the IEEE Information Visualization (InfoVis) and the IEEE Visual Analytics Science and Technology (VAST) Conferences and has served on multiple journal editorial boards. He received the IEEE Visualization and Graphics Technical Committee (VGTC) Visualization Technical Achievement Award in 2012, and was named an ACM Distinguished Scientist in 2011, an IEEE Fellow in 2014, and a member of the ACM CHI Academy in 2016. In 2013 he also became an Honorary Professor in the School of Computer Science at the Univ. of St. Andrews in Scotland.
Kanit "Ham" Wongsuphasawat, Apple
📅 Nov 19, 2018 (Monday), 2pm - 3pm
📍 Stata Center (32G-882)
Visualization is a critical tool for data science. Analysts use plots to explore and understand distributions and relationships in their data. Machine learning developers also use diagrams to understand and communicate complex model structures. Yet visualization authoring requires substantial manual effort and non-trivial decisions, demanding that authors have considerable expertise, discipline, and time in order to effectively visualize and analyze the data. My research in human-computer interaction focuses on the design of tools that augment visualization authoring with automated design and recommendation. By automating repetitive parts of authoring while preserving user control to guide the automation, people can leverage their domain knowledge and creativity to achieve their goals more effectively, with less effort and fewer errors. In my PhD dissertation, I developed new formal languages and systems for chart specification and recommendation, including the Vega-Lite visualization grammar and the CompassQL query language. On top of these languages, I developed and studied graphical interfaces that enable new forms of recommendation-powered visual data exploration, including the Voyager visualization browser and Voyager 2, which blends manual and automated chart authoring in a single tool. To help developers inspect deep learning architectures, I also built a tool that combines automatic layout techniques with user interaction to visualize dataflow graphs of TensorFlow models as part of TensorBoard, TensorFlow’s official dashboard tool. These projects have won awards at premier academic venues, and are used by the Jupyter/Python data science communities and leading tech companies including Apple, Airbnb, Google, Microsoft, Netflix, Twitter, and Uber.
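For readers unfamiliar with TensorBoard's graph view, the sketch below shows one way a developer might export a model's dataflow graph to it using the TensorFlow 2 summary API (a minimal, assumption-laden example; the visualizer described in the talk was built for TensorFlow's original graph format):

```python
import tensorflow as tf

# Trace a small function's dataflow graph and write it for TensorBoard's "Graphs" tab.
writer = tf.summary.create_file_writer("logs/graph_demo")

@tf.function
def model(x):
    # Two toy ops so the exported graph has some structure for the layout to arrange
    return tf.nn.relu(tf.matmul(x, tf.ones([4, 2])))

tf.summary.trace_on(graph=True)          # start recording the traced graph
model(tf.random.normal([3, 4]))          # run once so the function is traced
with writer.as_default():
    tf.summary.trace_export(name="model_graph", step=0)

# Then run `tensorboard --logdir logs` and open the Graphs tab to inspect the layout.
```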
Bio:Kanit "Ham" Wongsuphasawat is a research scientist at Apple where he works on visualization and interactive systems for data science and machine learning. Kanit has a PhD in Computer Science from the University of Washington (UW), where he worked with Jeffrey Heer and the Interactive Data Lab on visualization tools. Kanit also previously worked at a number of leading data-driven technology companies including Google, Tableau Software, Thomson Reuters, and Trifacta.
Jennifer Golbeck, University of Maryland
📅 Nov 13, 2018 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Online sharing combined with opaque mass surveillance and powerful analytic tools has led us to a place where data is collected and transformed into incredibly personal insights, often without users' knowledge or consent. This impacts the information they see, the way they interact, and it can be used in deeply manipulative ways. This talk will look at users' feelings about these practices and how they tie back to classic sociological understandings of trust, power, and privacy. I discuss possible ways forward to avoid an impending dystopia, especially in light of GDPR on one side and Chinese Social Credit on the other.
Bio:Jen Golbeck is a Professor in the College of Information Studies at the University of Maryland, College Park. Her research focuses on artificial intelligence and social media, privacy, and trust on the web. Her dogs are also famous on the internet and she runs their social media empire at theGoldenRatio4 on all platforms. She received an AB in Economics and an SB and SM in Computer Science at the University of Chicago, and a Ph.D. in Computer Science from the University of Maryland, College Park.
Felienne Hermans, TU Delft
📅 Nov 06, 2018 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
In education, there is and has always been debate about how to teach. One of these debates centers around the role of the teacher: should their role be minimal, allowing students to find and classify knowledge independently, or should the teacher be in charge of what happens in the classroom, explaining to students all they need to know? These forms of teaching have many names, but the most common ones are exploratory learning and direct instruction, respectively. While the debate is not settled, researchers are presenting more and more evidence that explicit direct instruction is more effective than exploratory learning for teaching language, mathematics, and science. These findings raise the question of whether the same might be true for programming education. This is especially of interest since programming education is deeply rooted in the constructionist philosophy, leading many programmers to follow exploratory learning methods, often without being aware of it. This talk outlines this history of programming education and the additional beliefs in programming that lead to the prevalence of exploratory forms of teaching. We subsequently explain the didactic principles of direct instruction, explore them in the context of programming, and hypothesize what direct instruction might look like for programming.
Bio:I am an assistant professor at Delft University of Technology, where I research end-user programming. End-user programming is programming for everyone who does not think of themselves as a programmer. In my PhD dissertation I worked on applying methods from software engineering to spreadsheets. During my PhD I founded a company called Infotron, which sells a tool called PerfectXL, based on techniques I developed, to spot errors in spreadsheets. My research, my company, and I have received some media coverage over the last few years. One of my biggest passions in life is to share my enthusiasm for programming/tech with others. I teach a bunch of kids LEGO Mindstorms programming every Saturday in a local community center. Furthermore, I am one of the founders of the Joy of Coding conference, a one-day developer conference in Rotterdam, and one of the hosts of the Software Engineering Radio podcast, one of the biggest software podcasts on the web. When I am not coding, blogging or teaching, I am probably dancing Lindy Hop with my beau Rico, out running, watching a movie or playing a (board) game.
Lex Fridman, MIT
📅 Oct 30, 2018 (Tuesday), 1pm - 2pm
📍 Stata Center (32G-882)
I will present a human-centered paradigm for building autonomous vehicle systems, contrasting it with how the problem is currently formulated and approached in academia and industry. The talk will include discussion and video demonstration of new work on driver state sensing, voice-based transfer of control, annotation of large-scale naturalistic driving data, and the challenges of building and testing a human-centered autonomous vehicle at MIT.
Bio:Lex Fridman is a research scientist at MIT, working on deep learning approaches to perception, control, and planning in the context of semi-autonomous vehicles and more generally human-centered artificial intelligence systems. His work focuses on learning-based methods that leverage large-scale, real-world data. Lex received his BS, MS, and PhD from Drexel University where he worked on applications of machine learning, computer vision, and decision fusion techniques in a number of fields including robotics, active authentication, and activity recognition. Before joining MIT, Lex was at Google leading deep learning efforts for large-scale behavior-based authentication. Lex is a recipient of a CHI-17 best paper award and a CHI-18 best paper honorable mention award.
Jessica Hullman, Northwestern University
📅 Oct 16, 2018 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Charts, graphs, and other information visualizations amplify cognition by enabling users to visually perceive trends and differences in quantitative data. While guidelines dictate how to choose visual encodings and metaphors to support accurate perception, it is less obvious how to design visualizations that encourage rational decisions from a statistical perspective. I'll motivate two challenges that must be overcome to support effective reasoning with visualizations. First, people's intuitions about uncertainty often conflict with statistical definitions. I'll describe research in my lab that shows how visualization techniques for conveying uncertainty through discrete samples can improve non-experts' ability to understand and make decisions from distributional information. Second, people often bring prior beliefs and expectations about data-driven phenomena to their interactions with data (e.g., I thought unemployment was down this year) which influence their interpretations. Most design and evaluation techniques do not account for these influences. I'll describe what we've learned by developing and studying visualization interfaces that encourage reflecting on data in light of one's own or others' prior knowledge. I'll conclude by reflecting on how better representations of uncertainty and prior knowledge can contribute to a Bayesian model of visualization interpretation.
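As a tiny illustration of the discrete-samples idea (in the spirit of quantile dotplots and hypothetical outcome plots, with made-up numbers rather than the studies' materials), the sketch below turns a predictive distribution into 20 equally likely outcomes that a reader can count, instead of judging areas under a density curve:

```python
import numpy as np
from scipy import stats

# Predictive distribution for, say, minutes until the next bus (illustrative numbers)
dist = stats.norm(loc=12.0, scale=3.0)

# Quantile-dotplot idea: represent the distribution as n equally likely discrete outcomes
n = 20
quantiles = dist.ppf((np.arange(n) + 0.5) / n)
print(np.round(quantiles, 1))

# "What's the chance the bus takes more than 15 minutes?" becomes a counting task:
late = np.mean(quantiles > 15)
print(f"~{late:.0%} of the 20 outcomes exceed 15 minutes")
```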
Bio:Jessica Hullman is an Assistant Professor in Computer Science and Journalism at Northwestern. The goal of her research is to develop computational tools that improve how people reason with data. She is particularly inspired by how science and data are presented to non-expert audiences in data and science journalism, where a shift toward digital news provides opportunities for informing through interactivity and visualization. Her work has provided automated tools and empirical findings around the use of visualizations to support communication and reasoning. Her current research focuses on how understandable presentations of uncertainty and interactive visualizations that enable users to articulate and reason with prior beliefs can transform how lay people and analysts alike interact with data. Jessica has received numerous paper awards from top Visualization and HCI venues, and is the recipient of NSF CRII and CAREER awards, among other grants. Prior to joining Northwestern in 2018, she spent three years as an Assistant Professor at the University of Washington Information School. She completed her Ph.D. at the University of Michigan and spent a year as the inaugural Tableau Software Postdoctoral Scholar in Computer Science at the University of California Berkeley in 2014 prior to joining the University of Washington in 2015.
Chris Olah, OpenAI
📅 Sept 18, 2018 (Tuesday), 1pm - 2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
How can we understand the inner workings of neural networks? Neural networks greatly exceed anything humans can design directly at computer vision by building up their own hierarchy of internal visual concepts. So, what are they detecting? How do they implement these detectors? How do the detectors fit together to create the behavior of the network as a whole? At a more practical level, can we use these techniques to audit neural networks? To find cases where the right decision is made for bad reasons? To allow human feedback on the decision process, rather than just the final decision? Or to improve our ability to design models?
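One concrete instance of this line of work is feature visualization by gradient ascent on the input. The toy sketch below (an untrained Keras network with invented sizes, not Olah's actual tooling) optimizes an image to strongly activate one channel, which is one way of asking what that unit is detecting:

```python
import tensorflow as tf

# A small, untrained convolutional network stands in for a real vision model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
])

img = tf.Variable(tf.random.uniform((1, 32, 32, 3)))   # the image we will optimize
opt = tf.keras.optimizers.Adam(0.05)

for _ in range(100):
    with tf.GradientTape() as tape:
        activations = model(img)
        loss = -tf.reduce_mean(activations[..., 0])     # maximize channel 0's mean activation
    grads = tape.gradient(loss, [img])
    opt.apply_gradients(zip(grads, [img]))

print("final mean activation:", float(-loss))  # img now drives that channel strongly
```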
Bio:Chris Olah is best known for DeepDream, the Distill journal, and his blog. He spent five years at Google Brain, where he focused on neural network interpretability and safety. He's also worked on various other projects, including early TensorFlow, generative models, and NLP. Prior to Google Brain, Chris dropped out of university and did deep learning research independently as a Thiel Fellow. Chris will be joining OpenAI in October to start a new interpretability team there.
Dr. Niki Kittur, Carnegie Mellon University
📅 May 07, 2018 (Monday), 11am - 12pm
📍 Kiva Seminar Room, Stata Center (32-G449)
A fundamental problem in the world is that the explosion of information is making it take longer and longer to learn any given domain. This leads to serious challenges for learning and decision making, whether deciding which programming API to use, what to do after a cancer diagnosis, or where to go in an unfamiliar city. Furthermore, creative breakthroughs in science and technology often come from finding analogies between multiple domains, exponentially compounding the problem. In this talk I discuss our efforts over the past 10 years towards addressing this problem by building a universal knowledge accelerator: a platform in which the sensemaking people engage in online is captured and made useful for others, leading to virtuous cycles of constantly improving information sources that in turn help people more effectively synthesize and innovate. I will demonstrate how tapping into the deep cognitive processing of the human mind can lead to fundamental advances in AI and help other users more deeply understand their data. I conclude by posing a grand challenge of capturing the deep cognitive processing involved in complex web search (1/10th of all labor hours) and developing new AI systems that can help scaffold future users' knowledge and creativity.
Bio:Aniket (Niki) Kittur is an Associate Professor and holds the Cooper-Siegel Chair in the Human-Computer Interaction Institute at Carnegie Mellon University. His research looks at how we can augment the human intellect using crowds and computation. He has authored and co-authored more than 80 peer-reviewed papers, 14 of which have received best paper awards or honorable mentions. Dr. Kittur is a Kavli fellow, has received an NSF CAREER award, the Allen Newell Award for Research Excellence, major research grants from NSF, NIH, Google, and Microsoft, and his work has been reported in venues including Nature News, The Economist, The Wall Street Journal, NPR, Slashdot, and the Chronicle of Higher Education. He received a BA in Psychology and Computer Science at Princeton, and a PhD in Cognitive Psychology from UCLA.
Dr. J. Nathan Matias, MIT Media Lab, Princeton University
📅 April 10, 2018 (Tuesday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Today's social technologies observe and intervene in the lives of billions of people, exercising tremendous power in society. Experimentation infrastructures, which manage tens of thousands of behavioral studies a year, offer one avenue for guiding the use and accountability of platform power. In this talk, I will describe CivilServant, a citizen behavioral science infrastructure that supports the public to test ideas for a fairer, safer, more understanding internet - independently of the tech industry. Communities with tens of millions of people have used CivilServant to test ideas for responding to human/algorithmic misinformation, preventing harassment, managing politically-partisan conflict, and monitoring the effects of AI law enforcement systems on civil liberties. As social technologies grow in power and reach, many have come to expect that they should address enduring social problems including misinformation, conflict, and public health. Consequently, engineers, designers, and data scientists are becoming policymakers whose systems govern the behavior of humans and machines at scale. Re-engineering these systems for public interest purposes in democratic societies requires substantial changes in the design, accountability, and statistical capabilities of software for mass experimentation.
Bio:Dr. J. Nathan Matias is a computer scientist and social scientist who organizes citizen behavioral science for a fairer, safer and more understanding internet. He advances this work in collaboration with tens of millions of people through his nonprofit CivilServant and as a postdoc at the Princeton University departments of Psychology, the Center for Information Technology Policy, and Sociology. Before Princeton, Nathan completed a PhD at the MIT Media Lab's Center for Civic Media, served as a fellow at Harvard's Berkman Klein Center, worked in tech startups that have reached over a billion devices, and helped start a series of education and journalistic charities. His journalism has appeared in The Atlantic, PBS, the Guardian, and other international media.
Susan R. Fussell, Cornell University
📅 Feb 20, 2018 (Tuesday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Computer-mediated communication (CMC) tools and social media potentially allow people to interact fluidly across national, cultural and linguistic boundaries in ways that would have been difficult if not impossible in the past. To date, however, much of this potential fails to be realized. A single individual is unlikely to be fluent in a wide array of languages. The use of a lingua franca such as English permits a degree of interaction with speakers of other native languages, but it can have negative effects on non-native speakers. Advances in machine translation (MT) and other technologies could allow people to communicate with one another in their native language, but translation errors can create sizeable misunderstandings when MT is used in conversational settings. In a series of studies, Prof. Fussell and her students have been exploring the problem space of inter-lingual communication, with the goals of better understanding the challenges of interaction across language boundaries and of informing the design of new tools to support this interaction. Prof. Fussell will first describe two interview studies exploring how the need to use a non-native language affects communication and coordination in both formal and informal settings. She will then describe several tools her group has developed to make MT more usable in everyday conversation and present the results of lab studies evaluating these tools. Taken together, these studies help advance the area of inter-lingual computer-mediated communication.
Bio:Susan R. Fussell is a Liberty Hyde Bailey Professor in the Department of Communication and the Department of Information Science at Cornell University. She received her BS degree in psychology and sociology from Tufts University, and her Ph.D. in social and cognitive psychology from Columbia University. Dr. Fussell's primary interests lie in the areas of computer-supported cooperative work and computer-mediated communication. Her current projects focus on intercultural and multilingual communication, telepresence robotics, collaborative intelligence analysis, public deliberation, and tools to motivate people to reduce their energy usage.
Daniel Wigdor, University of Toronto
📅 Nov 28, 2017 (Tuesday), 2pm-3pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Bio:Daniel Wigdor is an associate professor of computer science and co-director of the Dynamic Graphics Project at the University of Toronto. His research is in the area of human-computer interaction, with major areas of focus in the architecture of highly performant UIs, development methods for ubiquitous computing, and post-WIMP interaction methods. Before joining the faculty at U of T in 2011, Daniel was a researcher at Microsoft Research, the user experience architect of the Microsoft Surface Table, and a company-wide expert in user interfaces for new technologies. Simultaneously, he served as an affiliate assistant professor in both the Department of Computer Science & Engineering and the Information School at the University of Washington. Prior to 2008, he was a fellow at the Initiative in Innovative Computing at Harvard University, and conducted research as part of the DiamondSpace project at Mitsubishi Electric Research Labs. He is co-founder of Iota Wireless, a startup dedicated to the commercialization of his research in mobile-phone gestural interaction, and of Tactual Labs, a startup dedicated to the commercialization of his research in high-performance, low-latency user input. For his research, he has been awarded an Ontario Early Researcher Award (2014) and the Alfred P. Sloan Foundation's Research Fellowship (2015), as well as best paper awards or honorable mentions at CHI 2016, CHI 2015, CHI 2014, Graphics Interface 2013, CHI 2011, and UIST 2004. Three of his projects were selected as the People's Choice Best Talks at CHI 2014 and CHI 2015. Daniel is the co-author of Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, the first practical book for the design of touch and gesture interfaces. He has also published dozens of other works as invited book chapters and papers in leading international publications and conferences, and is an author of over three dozen patents and pending patent applications. Daniel is sought after as an expert witness, and has testified before courts in the United Kingdom and the United States. Further information, including publications and videos demonstrating some of his research, can be found at www.dgp.toronto.edu/~dwigdor.
Haoqi Zhang, Northwestern University
📅 Nov 7, 2017 (Tuesday), 2pm-3pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Despite the continued development of individual technologies and processes for supporting human endeavors, major leaps in solving complex human problems will require advances in system-level thinking and orchestration. In this talk, I describe efforts to design, build, and study Computational Ecosystems that interweave community process, social structures, and intelligent systems to unite people and machines to solve complex problems and advance human values at scale. Computational ecosystems integrate various components to support ecosystem function; the interplay among components synergistically advances desired values and problem solving goals in ways that isolated technologies and processes cannot. Taking a systems approach to design, computational ecosystems emphasize (1) computational thinking to decompose and distribute problem solving to diverse people or machines most able to address them; and (2) ecological thinking to create sustainable processes and interactions that support jointly the goals of ecosystem members and proper ecosystem function. I present examples of computational ecosystems designed to advance community-based planning and research training, which respectively engage thousands of people in planning an event and empower a single faculty member to provide authentic research training to 20+ students. These solutions demonstrate how to combine wedges of human and machine competencies into integrative technology-supported, community-based solutions. I will preview what's ahead for computational ecosystems, and close with a few thoughts on the role of computing technologies in advancing human values at scale.
Bio:Haoqi Zhang is the Allen K. and Johnnie Cordell Breed Junior Chair of Design and assistant professor in Computer Science at Northwestern University. His work advances the design of integrated socio-technical models that solve complex problems and advance human values at scale. His research bridges the fields of Human-Computer Interaction, Artificial Intelligence, Social & Crowd Computing, Learning Science, and Decision Science, and is generously supported by National Science Foundation grants in Cyber-Human Systems, Cyberlearning, and the Research Initiation Initiative. Haoqi received his PhD in Computer Science and BA in Computer Science and Economics from Harvard University. At Northwestern he founded and directs the Design, Technology, and Research (DTR) program, which provides an original model for research training for 50 graduate and undergraduate students. With Matt Easterday, Liz Gerber, and Nell O'Rourke, Haoqi co-directs the Delta Lab, an interdisciplinary research lab and design studio across computer science, learning science, and design.
Andy van Dam, Emanuel Zgraggen, Luke Murray, and Bob Zeleznik, Brown University
📅 Nov 1, 2017 (Wednesday), 10am-11am
📍 Kiva Seminar Room, Stata Center (32-G449)
In this talk we will present two current research projects from our Pen- and Touch Computing lab at Brown University.
First, we will demonstrate Vizdom (and its processing backend, IDEA), which are being developed in collaboration with Professor Tim Kraska's database management group and are sponsored by NSF and DARPA awards, as well as by gifts from Microsoft Research and Adobe. Vizdom is a pen- and touch-based interactive data exploration application with three salient features: 1) an emphasis on progressive computation that we argue (and have to some degree tested in usability studies) greatly improves the user experience on larger datasets; 2) a tight integration of visualizations, machine learning, and statistics within the same tool, through an accessible interaction paradigm, with the goal of empowering "data enthusiasts" - people who are not mathematicians or programmers and only know a bit of statistics; and 3) embedding visual data exploration in a statistical framework to prevent common problems and statistical pitfalls (e.g., the multiple comparisons problem).
In the second part of our talk, we demonstrate Dash, an early-stage prototype of an integrated environment for document-based knowledge work, enhanced with pen and touch interactions; this work is sponsored by Microsoft Research and Adobe. With Dash we aim to streamline common knowledge worker tasks by allowing users to create, collect, and relate heterogeneous documents in both structured and free-form workspaces. In contrast to most applications, which have special-purpose databases that aren't exposed as databases, Dash not only allows application-specific views but also exposes database views of its document and metadata information. This allows computational operators and data visualizations to be applied to any feature of the repository. Thus, with Dash, users create custom "dashboards" on their data as a byproduct of their natural workflow, since Dash treats all searches, visualizations, and layouts as first-class interactive documents on par with all other documents.
Bio:Andries van Dam is the Thomas J. Watson Jr. University Professor of Technology and Education and Professor of Computer Science at Brown University. He has been a member of Brown's faculty since 1965, was a co-founder of Brown's Computer Science Department and its first Chairman from 1979 to 1985, and was also Brown's first Vice President for Research from 2002 to 2006. His research includes work on computer graphics, hypermedia systems, post-WIMP and natural user interfaces (NUI), including pen- and touch computing, and educational software. He has been working for over four decades on systems for creating and reading electronic books with interactive illustrations for use in teaching and research. He is a Fellow of ACM, IEEE, and AAAS, and a member of the National Academy of Engineering and the American Academy of Arts & Sciences. He has received the ACM Karl V. Karlstrom Outstanding Educator Award, the SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics, and the IEEE Centennial Medal, and holds four honorary doctorates from Darmstadt Technical University in Germany, Swarthmore College, the University of Waterloo in Canada, and ETH Zurich.
Emanuel Zgraggen received his Fachhochschuldiplom in Informatik from HSR Hochschule fur Technik Rapperswil in Switzerland and his MS in computer science from Brown University. He is currently a PhD candidate at Brown University working in the graphics group, advised by Professor Andy van Dam and Professor Tim Kraska. His main research areas are Human-Computer Interaction, Information Visualization, and Data Science.
Robert Zeleznik is Director of User Interface Research for Brown University's Computer Graphics Group. He has worked broadly in the area of post-WIMP and pen-based human-computer interaction, with over two decades of experience developing both 2D and 3D gestural user interfaces and interaction techniques. In addition, he has worked extensively in the application domains of 2D drawing and 3D modeling, scientific and information visualization, and hypermedia.
Pedro Lopes, Hasso Plattner Institute
📅 Oct 17, 2017 (Tuesday), 2pm-3pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Today's interfaces get closer and closer to our body and are now literally attached to it, e.g., wearable devices and virtual reality headsets. These provide a very direct and immersive interaction with virtual worlds. But what if, instead, these interfaces were a "part of our body"? In this talk I introduce the idea of an interactive system based on electrical muscle stimulation (EMS). EMS is a technique from medical rehabilitation in which a signal generator and electrodes attached to the user's skin are used to send electrical impulses that involuntarily contract the user's muscles. While EMS devices have been used to regenerate lost motor functions in rehabilitation medicine since the '60s, it has only been a few years since researchers started to explore EMS as a means for creating interactive systems. These more recent projects, including six of our projects, explore EMS as a means for teaching users new motor skills, increasing immersion in virtual experiences by simulating impact and walls in VR/AR, communicating with remote users and allowing users to read & write information using eyes-free wearable devices.
Bio:Pedro is a researcher at the Human Computer Interaction Lab at the Hasso Plattner Institute, Germany. Pedro's work is published at ACM CHI/UIST and demonstrated at venues such as ACM SIGGRAPH and IEEE Haptics. Pedro has received the ACM CHI Best Paper award for his work on Affordance++, along with several nominations, and has exhibited at Ars Electronica 2017. His work has also captured the interest of media outlets such as MIT Technology Review, NBC, Discovery Channel, New Scientist, and Wired. Selected YouTube links: VR Walls, Muscle Plotter, Affordance++.
Rob Jacob, Tufts University
📅 Sep 26, 2017 (Tuesday), 2pm-3pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Implicit user interfaces obtain information from their users passively, typically in addition to mouse, keyboard, or other explicit inputs. They fit into the emerging trends of physiological computing and affective computing. Our work focuses on using brain input for this purpose, measured through functional near-infrared spectroscopy (fNIRS), as a way of increasing the narrow communication bandwidth between human and computer. Most previous brain-computer interfaces have been designed for people with severe motor disabilities and use explicit signals as the primary input, but these are too slow and inaccurate for wider use. Instead, we use brain measurement to obtain more information about the user and their context directly and without asking additional effort from them. We have obtained good results in a number of systems we created, as measured by objective task performance metrics. I will discuss our work on brain-computer interfaces and the more general area of implicit interaction. I will also discuss our concept of Reality-Based Interaction (RBI) as a unifying framework that ties together a large subset of emerging new, non-WIMP user interfaces. It attempts to connect current paths of research in HCI and to provide a framework that can be used to understand, compare, and relate these new developments. Viewing them through the lens of RBI can provide insights for designers and allow us to find gaps or opportunities for future development. I will briefly discuss some past work in my research group on a variety of next generation interfaces such as tangible interfaces and implicit eye movement-based interaction techniques.
Bio:Robert Jacob is a Professor of Computer Science at Tufts University, where his research interests are new interaction modes and techniques and user interface software; his current work focuses on implicit brain-computer interfaces. He has been a visiting professor at the University College London Interaction Centre, Universite Paris-Sud, and the MIT Media Laboratory. Before coming to Tufts, he was in the Human-Computer Interaction Lab at the Naval Research Laboratory. He received his Ph.D. from Johns Hopkins University, and he is a member of the editorial board for the journal Human-Computer Interaction and a founding member for ACM Transactions on Computer-Human Interaction. He has served as Vice-President of ACM SIGCHI, Papers Co-Chair of the CHI and UIST conferences, and General Co-Chair of UIST and TEI. He was elected as a member of the ACM CHI Academy in 2007 and as an ACM Fellow in 2016.
Meredith Ringel Morris, Microsoft Research
📅 May 23, 2017 (Tuesday), 1pm-2pm
📍 Hewlett Reading Room, Stata Center (32-G882)
ALS (amyotrophic lateral sclerosis) is a degenerative neuromuscular disease; people with late-stage ALS typically retain cognitive function, but lose the motor ability to speak, relying on gaze-controlled AAC (augmentative and alternative communication) devices for interpersonal interactions. State-of-the-art AAC technologies used by people with ALS do not facilitate natural communication; gaze-based AAC communication is extremely slow, and the resulting synthesized speech is flat and robotic. This lecture presents a series of novel technology prototypes from the Microsoft Research Enable team that aim to address the challenges of improving the expressivity of AAC for people with ALS.
Bio:Meredith Ringel Morris is a Principal Researcher at Microsoft Research, where she is affiliated with the Ability, Enable, and neXus research teams. She is also an affiliate faculty member at the University of Washington, in both the department of Computer Science and Engineering and the School of Information. Dr. Morris earned a Ph.D. in computer science from Stanford University in 2006, and also did her undergraduate work in computer science at Brown University. Her primary research area is human-computer interaction, specifically computer-supported cooperative work and social computing. Her current research focuses on the intersection of CSCW and Accessibility (“social accessibility”), creating technologies that facilitate people with disabilities in connecting with others in social and professional contexts. Past research contributions include foundational work in facilitating cooperative interactions in the domain of surface computing, and in supporting collaborative information retrieval via collaborative web search and friendsourcing.
Nicola Dell, Cornell Tech
📅 May 16, 2017 (Tuesday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
The goal of my research is to design, build, deploy, and evaluate novel computing systems that improve the lives of underserved populations in low-income regions. As computing technologies become affordable and accessible to diverse populations across the globe, it is critical that we expand the focus of HCI research to study the social, technical, and infrastructural challenges faced by these diverse communities and build systems that address problems in critical domains such as health care and education. In this talk, I describe my general approach to building technologies for underserved communities, including identifying opportunities for technology, conducting formative research to fully understand the space, developing novel technologies, iteratively testing and deploying, evaluating with target populations, and handing off to global development organizations for long-term sustainability.
Bio:Nicki Dell is an Assistant Professor in Information Science at Cornell Tech. Her research spans Human-Computer Interaction (HCI) and Information and Communication Technologies for Development (ICTD) with a focus on designing, building, and evaluating novel computing systems that improve the lives of underserved populations in low-income regions. Nicki’s research and outreach activities have been recognized through numerous paper awards and fellowships. Nicki was born and raised in Zimbabwe and received a B.Sc. in Computer Science from the University of East Anglia (UK) in 2004, and an M.S. and Ph.D. in Computer Science and Engineering from the University of Washington in 2011 and 2015 respectively.
Jeff Huang, Brown University
📅 May 2, 2017 (Tuesday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
I will present work that leverages user behavioral data to build personalized applications, which I call "behavior-powered systems". Two applications use online user interactions: 1) WebGazer uses interaction data made on any website to continuously calibrate a webcam-based eye tracker, so that users can manipulate any web page solely by looking. 2) Drafty tracks interactions with a detailed table of computer science professors to ask the crowd of readers to help keep structured data up-to-date by inferring their interests. And two applications use mobile sensing data: 3) SleepCoacher uses smartphone sensors to capture noise and movement data while people sleep to automatically generate recommendations about how to sleep better through a continuous cycle of mini-experiments. 4) Rewind uses passive location tracking on smartphones to recreate a person’s past memory through a fusion of geolocation, street side imagery, and weather data. Together, these systems show how subtle footprints of user behavior collected remotely can reimagine the way we gaze at websites, improve our sleep, experience the past, and maintain changing data.
Bio:Jeff Huang is an Assistant Professor in Computer Science at Brown University. His research in human-computer interaction focuses on behavior-powered systems, spanning the domains of mobile devices, personal informatics, and web search. Jeff’s Ph.D. is in Information Science from the University of Washington in Seattle, and his masters and undergraduate degrees are in Computer Science from the University of Illinois at Urbana-Champaign. Before joining Brown, he analyzed search behavior at Microsoft Research, Google, Yahoo, and Bing, and co-founded World Blender, a Techstars-backed company that made geolocation mobile games. Jeff has been a Facebook Fellow and has received a Google Research Award and NSF CAREER Award.
Wendy Mackay, INRIA
📅 April 21, 2017 (Friday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Incredible advances in hardware have not been matched by equivalent advances in software; we remain mired in the graphical user interface of the 1970s. I argue that we need a paradigm shift in how we design, implement and use interactive systems. Classical artificial intelligence treats the human user as a cog in the computer's process -- the so-called “human-in-the-loop”. Classical human-computer interaction focuses on creating and controlling the 'user experience'. We seek a third approach -- a true human-computer partnership, which takes advantage of machine learning, but leaves the user in control. I describe a series of projects that illustrate our approach to making interactive systems discoverable, appropriable and expressive, using the principles of instrumental interaction and reciprocal co-adaptation. The goal is to create robust interactive systems that significantly augment human capabilities and are actually worth learning over time.
Bio:Wendy Mackay is a Research Director, Classe Exceptionnelle, at Inria, France, where she heads the ExSitu (Extreme Situated Interaction) research group in Human-Computer Interaction at the Université Paris-Saclay. After receiving her Ph.D. from MIT, she managed research groups at Digital Equipment and Xerox EuroPARC, which were among the first to explore interactive video and tangible computing. She has been a visiting professor at the University of Aarhus and Stanford University and recently served as Vice President for Research at the University of Paris-Sud. Wendy is a member of the ACM CHI Academy, is a past chair of ACM/SIGCHI, chaired CHI'13 and received the ACM/SIGCHI Lifetime Achievement Service Award. She also received the prestigious ERC Advanced Grant for her research on co-adaptive instruments. She has published over 150 peer-reviewed research articles in the area of human-computer interaction. Her current research interests include human-computer partnerships, co-adaptive instruments, creativity, mixed reality and interactive paper, and participatory design and research methods.
Bjoern Hartmann, University of California, Berkeley
📅 April 19, 2017 (Wednesday), 2pm-3pm
📍 Kiva Seminar Room, Stata Center (32-G449)
My group's research in Human-Computer Interaction focuses on design, prototyping and implementation tools for the era of ubiquitous embedded computing and digital fabrication. We focus especially on supporting the growing ranks of amateur designers and engineers in the Maker Movement. Over the past decade, a resurgence of interest in how the artifacts in our world are designed, engineered and fabricated has led to new approaches for teaching art and engineering; new methods for creating artifacts for personal use; and new models for launching hardware products. The Maker Movement is enabled by a confluence of new technologies like digital fabrication and a sharing ethos built around online tutorials and open source design files. A crucial missing building block is appropriate design tools that enable Makers to translate their intent into machine instructions - whether code or 3D prints. Makers' expertise and work practices differ significantly from those of professional engineers - a reality that design tools have to reflect.
I will present research that enables Makers and designers to rapidly prototype, fabricate and program interactive products. Making headway in this area involves working in both hardware and software. Our group creates new physical fabrication hardware such as augmented power tools and custom CNC machines; new design software to make existing digital fabrication tools more useful; software platforms for the type of connected IoT devices many Makers are creating; and debugging tools for working at the intersection of hardware and software. We also create expertise sharing tools that lower the cost and increase the quality of online tutorials and videos through which knowledge is disseminated in this community.
Our work on these tools is motivated by the daily experience of teaching and building in the Jacobs Institute for Design Innovation - a 24,000 sq ft space for 21st-century design education that opened in 2015. I will give an overview of institute activities and projects, and how they inform our research agenda.
Bio:Bjoern Hartmann is an Associate Professor in EECS at UC Berkeley. He is the faculty director of the new Jacobs Institute for Design Innovation. He previously co-founded the CITRIS Invention Lab and also co-directs the Berkeley Institute of Design. His research has received numerous Best Paper Awards at top Human-Computer Interaction conferences, a Sloan Fellowship, an Okawa Research Award and an NSF CAREER Award. He received both the Diane S. McEntyre Award and the Jim and Donna Gray Faculty Award for Excellence in Teaching. He completed his PhD in Computer Science at Stanford University in 2009, and received degrees in Digital Media Design, Communication, and Computer and Information Science from the University of Pennsylvania in 2002. Before academia, he had a previous career as the owner of an independent record label and as a traveling DJ.
Brian Scassellati, Yale University
📅 March 21, 2017 (Tuesday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
For the past 15 years, I have been building robots that teach social and cognitive skills to children. Typically, we construct these robots to be social partners, engaging individuals with social skills that encourage them to respond to the robot as a social agent rather than as a mechanical device. Most of the time, interactions with artificial agents (both robots and virtual characters) follow the same rules as interactions with people.
The first part of this talk will focus on how human-robot interactions are uniquely different from both human-agent interactions and human-human interactions. These differences, taken together, provide a case for why robots might be unique tools for learning.
The second part of this talk will describe some of our ongoing work on building robots that teach. In particular, I will describe some of the efforts to use robots to enhance the therapy and diagnosis of autism spectrum disorder.
Bio:Brian Scassellati is a Professor of Computer Science, Cognitive Science, and Mechanical Engineering at Yale University and Director of the NSF Expedition on Socially Assistive Robotics. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills.
Dr. Scassellati received his Ph.D. in Computer Science from the Massachusetts Institute of Technology in 2001. His dissertation work (Foundations for a Theory of Mind for a Humanoid Robot) with Rodney Brooks used models drawn from developmental psychology to build a primitive system for allowing robots to understand people. His work at MIT focused mainly on two well-known humanoid robots named Cog and Kismet.
Dr. Scassellati's research in social robotics and assistive robotics has been recognized within the robotics community, the cognitive science community, and the broader scientific community. He was named an Alfred P. Sloan Fellow in 2007 and received an NSF CAREER award in 2003. His work has been awarded five best-paper awards. He was the chairman of the IEEE Autonomous Mental Development Technical Committee from 2006 to 2007, the program chair of the IEEE International Conference on Development and Learning (ICDL) in both 2007 and 2008, and the program chair for the IEEE/ACM International Conference on Human-Robot Interaction (HRI) in 2009.
Akane Sano, MIT Media Lab
📅 March 7, 2017 (Tuesday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
Sleep, stress and mental health have been major health issues in modern society. Poor sleep habits and high stress, as well as reactions to stressors and sleep habits, can depend on many factors. Internal factors include personality types and physiological factors, and external factors include behavioral, environmental and social factors. What if 24/7 rich data from mobile devices could identify which factors influence your bad sleep or stress problem and provide personalized early warnings to help you change behaviors, before sliding from a good to a bad health condition such as depression? In my talk, I will present a series of studies and systems we have developed to investigate how to leverage multi-modal data from mobile/wearable devices to measure, understand and improve mental wellbeing.
Bio:Akane Sano is a Research Scientist in the Affective Computing Group at the MIT Media Lab. Her research focuses on mobile health and affective computing. She has been working on measuring and understanding stress, sleep, mood and performance from long-term ambulatory human data, and on designing intervention systems to help people be aware of their behaviors and improve their health conditions. She completed her PhD at the MIT Media Lab in 2015. Before she came to MIT, she worked for Sony Corporation as a researcher and software engineer on wearable computing, human-computer interaction and personal health care. Recent awards include the AAAI Spring Symposium Best Presentation Award and the MIT Global Fellowship.
Iyad Rahwan, MIT Media Lab
📅 February 21, 2017 (Tuesday), 2pm-3pm
📍 Kiva Seminar Room, Stata Center (32-G449)
This talk explores the physical and cognitive limits of crowds, by following a number of real-world experiments that utilized social media to mobilize the masses in tasks of unprecedented complexity. From finding people in remote cities, to reconstructing shredded documents, the power of crowdsourcing is real, but so are exploitation, sabotage, and hidden biases that undermine the power of crowds.
Bio:Iyad Rahwan is the AT&T Career Development Professor and an Associate Professor of Media Arts & Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. He holds a PhD from the University of Melbourne, Australia, and is an affiliate faculty member at the MIT Institute for Data, Systems, and Society (IDSS).
Rubaiat Habib, Autodesk Research
📅 February 7, 2017 (Tuesday), 1pm-2pm
📍 Kiva Seminar Room, Stata Center (32-G449)
In this talk, I am going to present and demo our award-winning research initiative on creating custom animations: Project Draco. Project Draco was recently released as Sketchbook Motion, and was featured by Apple as "The best iPad app of the year 2016". With Project Draco, we investigate the question of how we can enable everyone to bring life to otherwise static drawings: how can we make animation as easy as sketching a static image? Most of us experience the power of animated media every day: animation makes it easy to communicate complex ideas beyond verbal language. However, only a few of us have the skills to express ourselves through this medium. By making animation as easy, accessible, and fluid as sketching, I intend to make dynamic drawings a powerful medium to think, create, and communicate rapidly.
Bio:Rubaiat Habib is a Senior Research Scientist, artist, and designer at Autodesk Research. His research interest lies at the intersection of Computer Graphics and HCI for creative thinking, design, and storytelling. Rubaiat has received several awards for his work, including two ACM CHI Best Paper Nominations, ACM CHI and ACM UIST People's Choice Best Talk awards, and ACM CHI Golden Mouse awards for best research videos. For his PhD at the National University of Singapore, Rubaiat also received a Microsoft Research Asia PhD fellowship. Rubaiat's research in dynamic drawings and animation is regularly turned into new products reaching a global audience. As a freelance cartoonist and designer, he has contributed to a number of magazines, books, and newspapers.
Alice Oh, KAIST
May 6, 2014. 2pm-3pm. Location: 32-G449
Marti Hearst, UC Berkeley
April 11, 2014. 3pm-4pm. Location: 32-D463
Brian Bailey, University of Illinois at Urbana-Champaign
April 4, 2014. 2pm-3pm. Location: 32-G449
Chris Parnin, Georgia Institute of Technology
March 13, 2014. 11am-12pm. Location: 32-G449
Mc Schraefel, University of Southampton
November 22, 2013. 1pm-2pm. Location: 32-G449
Megan Monroe, University of Maryland
December 13, 2013. 1pm-2pm. Location: 32-G449
Wendy Mackay, Inria & Université Paris-Sud
December 12, 2013. 2pm-3pm. Location: 32-G449
Monica Lam, Stanford University
April 10, 2013. 4pm-5pm. Location: 32-G449
Paul André, Carnegie Mellon University
March 8, 2013. 11am-12pm. Location: 32-G882
Aron Pilhofer, The New York Times
December 14, 2012. 1pm-2pm. Location: 32-G449
Eric Gilbert, Georgia Tech
December 7, 2012. 1pm-2pm. Location: 32-G449
Mark Guzdial, Georgia Tech
November 30, 2012. 1-2pm. Location: 32-G449
Amy Ogan, Carnegie Mellon University
November 16, 2012. 1pm-2pm. Location: 32-G449
Chris Harrison, Carnegie Mellon University
November 15, 2012. 3pm-4pm. Location: 32-G449
Jeff Bigham, University of Rochester
November 9, 2012. 1pm-2pm. Location: 32-G882
Steven Dow, Carnegie Mellon University
November 2, 2012. 1pm-2pm. Location: 32-G449
Mark Ackerman, University of Michigan
April 27, 2012. 2pm-3pm. Location: 32-G449
Björn Hartmann, University of California, Berkeley
April 26, 2012. 11am-12pm. Location: 32-G449
Aniket Kittur, Carnegie Mellon University
April 6, 2012. 2pm-3pm. Location: 32-G449
Remco Chang, Tufts University
March 16, 2012. 2pm-3pm. Location: 32-G449
Tovi Grossman, Autodesk Research
March 2, 2012. 2pm-3pm. Location: 32-G449
Adrian Kuhn, University of British Columbia
December 14, 2011. 2pm-3pm. Location: 32-G449
Panos Ipeirotis, NYU
December 9, 2011. 1pm-2pm. Location: 32-G449
Desney Tan, Microsoft Research
December 7, 2011. 1pm-2pm. Location: 32-G882
Daniel Wigdor, University of Toronto
December 2, 2011. 1pm-2pm. Location: 32-G449
Lada Adamic, University of Michigan
November 18, 2011. 1pm-2pm. Location: 32-G882
Rebecca Fiebrink, Princeton University
November 4, 2011. 1pm-2pm. Location: 32-G449
Michael Terry, University of Waterloo
October 28, 2011. 1pm-2pm. Location: 32-G449
Liz Gerber, Northwestern University
April 29, 2011. 1pm-2pm. Location: 32-G449
Cliff Lampe, Michigan State University
April 22, 2011. 1pm-2pm. Location: 32-G449
Andy Ko, University of Washington
April 15, 2011. 1pm-2pm. Location: 32-G449
John Riedl, University of Minnesota
April 8, 2011. 1pm-2pm. Location: 32-G449
Nicole Ellison, Michigan State University
April 1, 2011. 1pm-2pm. Location: 32-G449
Adam Perer, IBM Research
March 18, 2011. 1pm-2pm. Location: 32-G449
Joel Brandt, Adobe Systems
February 25, 2011. 1pm-2pm. Location: 32-G449
Ken Perlin, NYU
February 18, 2011. 1pm-2pm. Location: 32-G449
Mc Schraefel, University of Southampton
February 4, 2011. 1pm-2pm. Location: 32-G449
Jennifer Lee, Knight News Challenge
November 19, 2010. 1pm-2pm. Location: 32-G449
Mira Dontcheva, Adobe Research
October 15, 2010. 1pm-2pm. Location: 32-D463
David Ayman Shamma, Yahoo! Research
October 8, 2010. 1pm-2pm. Location: 32-G449
Chris Schmandt, MIT Media Lab
May 7, 2010. 1pm-2pm. Location: 32-G449
Nick Bilton, New York Times
April 9, 2010. 1pm-2pm. Location: 32-G449
Niki Kittur, Carnegie Mellon University
February 19, 2010. 1pm-2pm. Location: 32-G449
Karrie Karahalios, University of Illinois at Urbana-Champaign
February 12, 2010. 1pm-2pm. Location: 32-G449
Elizabeth Churchill, Yahoo! Research
December 4 2009. 1pm-2pm. Location: 32-G449
Danah Boyd, Microsoft Research and Harvard Berkman Center for Internet and Society
November 20 2009. 1pm-2pm. Location: 32-G449
Jonathan Grudin, Microsoft Research
November 13 2009. 1pm-2pm. Location: 32-D463
Michael Muller and N Sadat Shami, IBM Research and IBM Center for Social Software
November 6 2009. 1pm-2pm. Location: 32-G449
Chieko Asakawa, IBM Research Tokyo
October 30 2009. 1pm-2pm. Location: 32-G449
Steve Whittaker, IBM Research Almaden
October 23 2009. 1pm-2pm. Location: 32-G449
Jeff Nichols, IBM Research
October 2 2009. 1pm-2pm. Location: 32-G449
Douglas Crockford, Yahoo! Research
September 18 2009. 1pm-2pm. Location: 32-G449
John Stasko, Georgia Institute of Technology
September 11 2009. 1pm-2pm. Location: 34-401
William Jones, University of Washington
May 5, 2009. 11am-12pm. Location: 32-G449
Orit Shaer, Wellesley
May 1, 2009. 2pm-3pm. Location: 32-G449
Patrick Baudisch, Microsoft Research and Hasso Plattner Institute
April 10, 2009. 2pm-3pm. Location: 32-G449
Michel Beaudouin-Lafon, Université de Paris-Sud
April 10, 2009. 11am-12pm. Location: 32-D463
Jaime Teevan, Microsoft Research
April 3, 2009. 2pm-3pm. Location: 32-G449
Takeo Igarashi, University of Tokyo
March 30, 2009. 2pm-3pm. Location: 32-D463
Joseph Lawrance, Oregon State University
March 13, 2009. 2pm-3pm. Location: 32-G449
Belle Tseng, Yahoo!
March 3, 2009. 11am-12pm. Location: 32-D463
Krzysztof Gajos, Harvard University and Microsoft Research
February 27, 2009. 2pm-3pm. Location: 32-G449
Ruth Rosenholtz, MIT Brain and Cognitive Sciences
February 20, 2009. 2pm-3pm. Location: 32-G449
Ed Chi, PARC
February 3, 2009. 11am-12pm. Location: 32-G449
Saul Greenberg, University of Calgary
December 3, 2008. 11am-12pm. Location: 32-G449
Ryan Lesser and Dan Schmidt, Harmonix
November 21, 2008. 3pm-4pm. Location: 32-155
François Guimbretière, University of Maryland Human-Computer Interaction Lab
November 14, 2008. 2pm-3pm. Location: 32-G449
Jill Freyne, University College Dublin
November 7, 2008. 2pm-3pm. Location: 32-G449
Khai N. Truong, University of Toronto
October 17, 2008. 2pm-3pm. Location: 32-G449
Maneesh Agrawala, UC Berkeley
October 10, 2008. 1pm-2pm. Location: 32-G449
Merrie Morris, Microsoft Research
September 26, 2008. 2pm-3pm. Location: 32-D463
Li-Te Cheng and Steven Rohall, IBM Research Cambridge
September 5, 2008. 2pm-3pm. Location: 32-G449
Ronald Baecker, Knowledge Media Design Institute and Dept of Computer Science, University of Toronto
May 8, 2008. 2pm-3pm. Location: 32-G449
Dan Olsen, Brigham Young University
May 2, 2008. 2pm-3pm. Location: 32-G449
Andreas Paepcke, Stanford University
April 18, 2008. 2pm-3pm. Location: 32-G449
Tessa Lau, IBM Research Almaden
March 21, 2008. 2pm-3pm. Location: 32-G449
Candy Sidner, BAE Systems AIT
March 14, 2008. 3pm-4pm. Location: 32-D463
Harry West, Continuum Design
March 7, 2008. 2pm-3pm. Location: 32-G449
Mark Ashdown, MIT Humans and Automation Laboratory
February 29, 2008. 2pm-3pm. Location: 32-G449
Scott Hudson, Carnegie Mellon HCI Institute
February 7, 2008. 2pm-3pm. Location: 32-G449
Daniel Wigdor, University of Toronto
November 30, 2007. 2pm-3pm. Location: 32-G449
Jeff Heer, UC Berkeley
November 16, 2007. 2pm-3pm. Location: 32-G449
Irene Greif, IBM Research
November 9, 2007. 2pm-3pm. Location: 32-G449
Amy Bruckman, Georgia Institute of Technology
November 2, 2007. 2pm-3pm. Location: 32-G449
Ryen White, Microsoft Research
October 23, 2007. 1pm-2pm. Location: 32-G449
Scott Klemmer, Stanford University
October 5, 2007. 2pm-3pm. Location: 32-G449
Jason Hong, Carnegie Mellon HCI Institute
September 28, 2007. 2pm-3pm. Location: 32-G449
Jon Herlocker, Oregon State University and Smart Desktop, Inc.
September 21, 2007. 2pm-3pm. Location: 32-G449
Bill Buxton, Microsoft Research
September 14, 2007. 1pm-2pm. Location: 32-141
Katherine Isbister, Rensselaer Polytechnic Institute
May 11, 2007. 1:30pm-2:30pm. Location: 32-D463
Stacey Scott, MIT Humans and Automation Lab
April 27, 2007. 1:30pm-2:30pm. Location: 32-D463
Marti Hearst, School of Information, UC Berkeley
April 20, 2007. 1:30pm-2:30pm. Location: 32-D463
Christopher R. Wren, Mitsubishi Electric Research Laboratories
April 6, 2007. 1:30pm-2:30pm. Location: 32-D463
Alfred Kobsa, Department of Informatics, University of California, Irvine
March 30, 2007. 1:30pm-2:30pm. Location: 32-D463
Jeremy Bailenson, Stanford University
March 16, 2007. 1:30pm-2:30pm. Location: 32-D463
Mor Naaman, Yahoo! Research Berkeley
February 23, 2007. 1:30pm-2:30pm. Location: 32-D463
Brian P. Bailey, University of Illinois
February 16, 2007. 1:30pm-2:30pm. Location: 32-D463
Gregory Abowd, Georgia Tech
February 2, 2007. 1:30pm-2:30pm. Location: 32-G449
Harold Thimbleby, Swansea University
January 11, 2007. 4pm-5pm. Location: 32-G449
Steven Drucker, Microsoft Research (Live Labs)
November 30, 2006. 11am-12pm. Location: 32-G449
Mc Schraefel, University of Southampton
November 17, 2006. 1:30pm-2:30pm. Location: 32-D463
Mary Ellen Zurko, IBM
October 20, 2006. 1:30pm-2:30pm. Location: 32-D463
Peter Tarasewich, Northeastern University
October 6, 2006. 1:30pm-2:30pm. Location: 32-D463
Karen Holtzblatt, InContext Enterprises
September 29, 2006. 1:30pm-2:30pm. Location: 32-D463
James Lin, IBM Research Almaden
September 20, 2006. 11am-12pm. Location: 32-G449
Patrick Baudisch, Microsoft Research
July 28, 2006. 1:30pm-2:30pm. Location: 32-D463
Bill Barnert, SavaJe Technologies
May 5, 2006. 1:30pm-2:30pm. Location: 32-G449
Kazuhiro Otsuka, NTT
April 28, 2006. 2pm-3pm. Location: 32-G449
Pekka Ketola, Nokia
April 7, 2006. 1:30pm-2:30pm. Location: 32-G449
Fritz Knabe, Endeca
March 17, 2006. 1:30pm-2:30pm. Location: 32-G449
Michael Muller, IBM Research Cambridge
March 10, 2006. 1:30pm-2:30pm. Location: 32-G449
Paul Lukowicz, UMIT, Innsbruck, Austria and ETH Zurich, Switzerland
December 2, 2005. 3pm-4pm. Location: 32-G449
William Gribbons, Design and Usability Center, Bentley College
November 18, 2005. 1:30pm-2:30pm. Location: 32-G449
Martin Wattenberg, IBM Watson Research Center
November 4, 2005. 1:30pm-2:30pm. Location: 32-G449
Tamara Adlin, Amazon Services
October 25, 2005. 11am-12pm. Location: 32-G449
Ben Shneiderman, University of Maryland, College Park
October 18, 2005. 4pm-5pm. Location: 32-G449
Stephen Intille, MIT House
October 14, 2005. 1:30pm-2:30pm. Location: 32-G449
Beth Logan, Hewlett Packard
October 7, 2005. 1:30pm-2:30pm. Location: 32-G449
Marina Bers, Tufts University
September 30, 2005. 1:30pm-2:30pm. Location: 32-G449
Judith Tabolt Matthews, University of Pittsburgh, School of Nursing
September 23, 2005. 1:30pm-2:30pm. Location: 32-G449
Jared Spool, User Interface Engineering
September 16, 2005. 1:30pm-2:30pm. Location: 32-G449
Ted Selker, MIT Media Lab
September 9, 2005. 1:30pm-2:30pm. Location: 32-D463
Edward Tse, University of Calgary
September 2, 2005. 1:30pm-2:30pm. Location: 32-D463
David MacKay, University of Cambridge
July 21, 2005. 10am-11am. Location: 32-G449
Pattie Maes, MIT Media Lab
May 13, 2005. 1:30pm-2:30pm. Location: 32-G449
Chia Shen, MERL
April 29, 2005. 1:30pm-2:30pm. Location: 32-G449
Steven Feiner, Columbia University
April 22, 2005. 1:30pm-2:30pm. Location: 32-G449
Susan T. Dumais, Microsoft Research
April 13, 2005. 4:00pm-5:00pm. Location: 32-G449
Michael Muller, IBM Watson Research Center
February 25, 2005. 1:30pm-2:30pm. Location: 32-G449
Daniel Weld, University of Washington
February 18, 2005. 1:30pm-2:30pm. Location: 32-G449
Holly Yanco, University of Massachusetts, Lowell
February 4, 2005. 1:30pm-2:30pm. Location: 32-G449
Sean W. Smith, Dartmouth College
December 10, 2004. 1:30-2:30pm. Location: 32-G449
Demetrios Karis, Verizon Labs
November 19, 2004. 1:30-2:30pm. Location: 32-G449
Gary Marchionini, School of Information and Library Science, University of North Carolina
November 12, 2004. 1:30-2:30pm. Location: 32-G449
Tom Igoe, NYU
November 5, 2004. 1:30-2:30pm. Location: 32-D463
Barbara J. Grosz, Harvard University
October 29, 2004. 1:30-2:30pm. Location: 32-G449
Gregory D. Abowd, College of Computing and GVU Center, Georgia Institute of Technology
October 22, 2004. 1:30-2:30pm. Location: 32-G449
Margrit Betke, Boston University
October 15, 2004. 1:30-2:30pm. Location: 32-G449
William Jones, The Information School, University of Washington
October 12, 2004. 3:00-4:00pm. Location: 32-D463
Hiroshi Ishii, Tangible Media Group, MIT Media Lab
October 8, 2004. 1:30-2:30pm. Location: 32-G449
Missy Cummings, MIT Humans and Automation Lab
October 1, 2004. 1:30-2:30pm. Location: 32-G449
Robert C. Miller, MIT CSAIL
September 24, 2004. 1:30-2:30pm. Location: 32-G449
Ben Bederson, Human Computer Interaction Lab, University of Maryland
September 17, 2004. 1:30-2:30pm. Location: 32-G449
Carol Neidle and Robert G. Lee, Boston University
June 9, 2004. 11am - 12pm. Location: 32-D507
Rosalind Picard, MIT Media Lab
April 16, 2004. 1:30pm - 2:30pm. Location: 32-500
Mary Czerwinski, Microsoft Research
April 9, 2004. 1:30pm - 2:30pm. Location: 34-401B
Henry Lieberman, MIT Media Lab
February 13, 2004. 1:30pm - 2:30pm. Location: NE43-941
Ruth Rosenholtz, MIT BCS
December 5, 2003. 1:30pm. Location: NE43-941
Rob Capra, Center for HCI, Virginia Tech
November 21, 2003. 1:30pm. Location: NE43-941
Judith Donath, MIT Media Lab
November 14, 2003. 1:30pm. Location: NE43-941
Jared Spool, User Interface Engineering
October 31, 2003. 1:30pm. Location: NE43-941
Michael Muller, Werner Geyer and Beth Brownholt, IBM Watson Research Center
October 24, 2003. 1:30pm. Location: NE43-941
David Brown and Mark Claypool, Worcester Polytechnic Institute
October 17, 2003. 1:30pm. Location: NE43-941
Tom Erickson, IBM Watson Research Center
October 10, 2003. 1:30pm. Location: NE43-941
Tom Tullis, Fidelity Investments
September 26, 2003. 1:30pm. Location: NE43-941
Candy Sidner, MERL
September 19, 2003. 1:30pm. Location: NE43-941
Henry Lieberman, MIT Media Lab
September 5, 2003. 1:30pm. Location: NE43-941
Andrew Senior, IBM Watson Research Center
July 24, 2003. 4:00pm. Location: NE43-941
Stacey Scott, University of Calgary
August 13 2004. 1:00pm-2:00pm. Location: 32-397
Mark Maybury, MITRE
May 16, 2003. 1:30pm-2:30pm. Location: NE43-941
Brad Myers, Human Computer Interaction Institute, CMU
May 9, 2003. 1:30pm-2:30pm. Location: NE43-941
Irene Greif, IBM Watson Research Center
May 2, 2003. 1:30pm-2:30pm. Location: NE43-941
Stephen Intille, MIT House
April 18, 2003. 1:30pm-2:30pm. Location: NE43-941
Nicole Yankelovich, Sun Microsystems Laboratories
April 11, 2003. 1:30pm-2:30pm. Location: NE43-941
Joe Marks, MERL
March 14, 2003. 1:30pm-2:30pm. Location: NE43-941
Neil Heffernan, Worcester Polytechnic Institute
March 7, 2003. 1:30pm-2:30pm. Location: NE43-941
Robert J.K. Jacob, Tufts University
February 28, 2003. 1:30pm-2:30pm. Location: NE43-941
Phil Cohen, Oregon Health and Science University
February 21, 2003. 1:30pm-2:30pm. Location: NE43-941