ALS (amyotrophic lateral sclerosis) is a degenerative neuromuscular disease; people with late-stage ALS typically retain cognitive function, but lose the motor ability to speak, relying on gaze-controlled AAC (augmentative and alternative communication) devices for interpersonal interactions. State-of-the-art AAC technologies used by people with ALS do not facilitate natural communication; gaze-based AAC communication is extremely slow, and the resulting synthesized speech is flat and robotic. This lecture presents a series of novel technology prototypes from the Microsoft Research Enable team that aim to address the challenges of improving the expressivity of AAC for people with ALS.
Meredith Ringel Morris is a Principal Researcher at Microsoft Research, where she is affiliated with the Ability, Enable, and neXus research teams. She is also an affiliate faculty member at the University of Washington, in both the Department of Computer Science and Engineering and the School of Information. Dr. Morris earned a Ph.D. in computer science from Stanford University in 2006, and completed her undergraduate work in computer science at Brown University.
Her primary research area is human-computer interaction, specifically computer-supported cooperative work and social computing.
Her current research focuses on the intersection of CSCW and Accessibility (“social accessibility”), creating technologies that
help people with disabilities connect with others in social and professional contexts.
Past research contributions include foundational work in facilitating cooperative interactions in the domain of surface computing,
and in supporting collaborative information retrieval via collaborative web search and friendsourcing.
The goal of my research is to design, build, deploy, and evaluate novel computing systems that improve the lives of underserved populations in low-income regions. As computing technologies become affordable and accessible to diverse populations across the globe, it is critical that we expand the focus of HCI research to study the social, technical, and infrastructural challenges faced by these diverse communities and build systems that address problems in critical domains such as health care and education. In this talk, I describe my general approach to building technologies for underserved communities, including identifying opportunities for technology, conducting formative research to fully understand the space, developing novel technologies, iteratively testing and deploying, evaluating with target populations, and handing off to global development organizations for long-term sustainability.
Nicki Dell is an Assistant Professor in Information Science at Cornell Tech. Her research spans Human-Computer Interaction (HCI) and Information and Communication Technologies for Development (ICTD) with a focus on designing, building, and evaluating novel computing systems that improve the lives of underserved populations in low-income regions. Nicki’s research and outreach activities have been recognized through numerous paper awards and fellowships. Nicki was born and raised in Zimbabwe and received a B.Sc. in Computer Science from the University of East Anglia (UK) in 2004, and an M.S. and Ph.D. in Computer Science and Engineering from the University of Washington in 2011 and 2015 respectively.
I will present work that leverages user behavioral data to build personalized applications, which I call "behavior-powered systems". Two applications use online user interactions: 1) WebGazer uses interaction data from any website to continuously calibrate a webcam-based eye tracker, so that users can manipulate any web page solely by looking. 2) Drafty tracks interactions with a detailed table of computer science professors, inferring readers' interests so it can ask the crowd of readers to help keep structured data up to date. And two applications use mobile sensing data: 3) SleepCoacher uses smartphone sensors to capture noise and movement data while people sleep, automatically generating recommendations for better sleep through a continuous cycle of mini-experiments. 4) Rewind uses passive location tracking on smartphones to recreate a person's past memories through a fusion of geolocation, street-side imagery, and weather data. Together, these systems show how subtle footprints of user behavior, collected remotely, can reimagine the way we gaze at websites, improve our sleep, experience the past, and maintain changing data.
Jeff Huang is an Assistant Professor in Computer Science at Brown University. His research in human-computer interaction focuses on behavior-powered systems, spanning the domains of mobile devices, personal informatics, and web search. Jeff earned his Ph.D. in Information Science from the University of Washington in Seattle, and his master's and undergraduate degrees in Computer Science from the University of Illinois at Urbana-Champaign. Before joining Brown, he analyzed search behavior at Microsoft Research, Google, Yahoo, and Bing, and co-founded World Blender, a Techstars-backed company that made geolocation mobile games. Jeff has been a Facebook Fellow and has received a Google Research Award and an NSF CAREER Award.
Incredible advances in hardware have not been matched by equivalent advances in software; we remain mired in the graphical user interface of the 1970s. I argue that we need a paradigm shift in how we design, implement and use interactive systems. Classical artificial intelligence treats the human user as a cog in the computer's process -- the so-called “human-in-the-loop”. Classical human-computer interaction focuses on creating and controlling the 'user experience'. We seek a third approach -- a true human-computer partnership, which takes advantage of machine learning but leaves the user in control. I describe a series of projects that illustrate our approach to making interactive systems discoverable, appropriable and expressive, using the principles of instrumental interaction and reciprocal co-adaptation.
The goal is to create robust interactive systems that significantly augment human capabilities and are actually worth learning over time.
Wendy Mackay is a Research Director, Classe Exceptionnelle, at Inria, France, where she heads the ExSitu (Extreme Situated Interaction) research group in Human-Computer Interaction at the Université Paris-Saclay. After receiving her Ph.D. from MIT, she managed research groups at Digital Equipment and Xerox EuroPARC, which were among the first to explore interactive video and tangible computing. She has been a visiting professor at the University of Aarhus and Stanford University and recently served as Vice President for Research at the University of Paris-Sud. Wendy is a member of the ACM CHI Academy, is a past chair of ACM SIGCHI, chaired CHI'13 and received the ACM SIGCHI Lifetime Service Award. She also received the prestigious ERC Advanced Grant for her research on co-adaptive instruments. She has published over 150 peer-reviewed research articles in the area of Human-Computer Interaction. Her current research interests include human-computer partnerships, co-adaptive instruments, creativity, mixed reality and interactive paper, and participatory design and research methods.
My group's research in Human-Computer Interaction focuses on design, prototyping and implementation tools for the era of ubiquitous embedded computing and digital fabrication. We focus especially on supporting the growing ranks of amateur designers and engineers in the Maker Movement. Over the past decade, a resurgence of interest in how the artifacts in our world are designed, engineered and fabricated has led to new approaches for teaching art and engineering; new methods for creating artifacts for personal use; and new models for launching hardware products. The Maker Movement is enabled by a confluence of new technologies like digital fabrication and a sharing ethos built around online tutorials and open source design files. A crucial missing building block is appropriate design tools that enable Makers to translate their intent into machine instructions - whether code or 3D prints. Makers' expertise and work practices differ significantly from those of professional engineers - a reality that design tools have to reflect.
I will present research that enables Makers and designers to rapidly prototype, fabricate and program interactive products. Making headway in this area involves working in both hardware and software. Our group creates new physical fabrication hardware such as augmented power tools and custom CNC machines; new design software to make existing digital fabrication tools more useful; software platforms for the type of connected IoT devices many Makers are creating; and debugging tools for working at the intersection of hardware and software. We also create expertise sharing tools that lower the cost and increase the quality of online tutorials and videos through which knowledge is disseminated in this community.
Our work on these tools is motivated by the daily experience of teaching and building in the Jacobs Institute for Design Innovation - a 24,000 sq ft space for 21st-century design education that opened in 2015. I will give an overview of institute activities and projects, and how they inform our research agenda.
Bjoern Hartmann is an Associate Professor in EECS at UC Berkeley. He is the faculty director of the new Jacobs Institute for Design Innovation. He previously co-founded the CITRIS Invention Lab and also co-directs the Berkeley Institute of Design. His research has received numerous Best Paper Awards at top Human-Computer Interaction conferences, a Sloan Fellowship, an Okawa Research Award and an NSF CAREER Award. He received both the Diane S. McEntyre Award and the Jim and Donna Gray Faculty Award for Excellence in Teaching. He completed his PhD in Computer Science at Stanford University in 2009, and received degrees in Digital Media Design, Communication, and Computer and Information Science from the University of Pennsylvania in 2002. Before academia, he had a previous career as the owner of an independent record label and as a traveling DJ.
What Makes Robots Special? Lessons from Building Robots that Teach
For the past 15 years, I have been building robots that teach social and cognitive skills to children. Typically, we construct these robots to be social partners, engaging individuals in ways that encourage them to respond to the robot as a social agent rather than as a mechanical device. Most of the time, interactions with artificial agents (both robots and virtual characters) follow the same rules as interactions with people.
The first part of this talk will focus on how human-robot interactions are uniquely different from both human-agent interactions and human-human interactions. These differences, taken together, provide a case for why robots might be unique tools for learning.
The second part of this talk will describe some of our ongoing work on building robots that teach. In particular, I will describe some of the efforts to use robots to enhance the therapy and diagnosis of autism spectrum disorder.
Brian Scassellati is a Professor of Computer Science, Cognitive Science, and Mechanical Engineering at Yale University and Director of the NSF Expedition on Socially Assistive Robotics. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills.
Dr. Scassellati received his Ph.D. in Computer Science from the Massachusetts Institute of Technology in 2001. His dissertation work (Foundations for a Theory of Mind for a Humanoid Robot) with Rodney Brooks used models drawn from developmental psychology to build a primitive system for allowing robots to understand people. His work at MIT focused mainly on two well-known humanoid robots named Cog and Kismet.
Dr. Scassellati's research in social robotics and assistive robotics has been recognized within the robotics community, the cognitive science community, and the broader scientific community. He was named an Alfred P. Sloan Fellow in 2007 and received an NSF CAREER award in 2003. His work has been awarded five best-paper awards. He was the chairman of the IEEE Autonomous Mental Development Technical Committee from 2006 to 2007, the program chair of the IEEE International Conference on Development and Learning (ICDL) in both 2007 and 2008, and the program chair for the IEEE/ACM International Conference on Human-Robot Interaction (HRI) in 2009.
Measuring Sleep, Stress and Wellbeing with Wearable Sensors and Mobile Phones
Sleep, stress and mental health have been major health issues in modern society. Poor sleep habits and high stress, as well as reactions to stressors and sleep habits, can depend on many factors. Internal factors include personality types and physiological factors; external factors include behavioral, environmental and social factors. What if 24/7 rich data from mobile devices could identify which factors influence your poor sleep or stress problem and provide personalized early warnings to help you change behaviors before you slide from good health into a bad health condition such as depression? In my talk, I will present a series of studies and systems we have developed to investigate how to leverage multi-modal data from mobile and wearable devices to measure, understand and improve mental health and wellbeing.
Akane Sano is a Research Scientist in the Affective Computing Group at the MIT Media Lab. Her research focuses on mobile health and affective computing. She has been working on measuring and understanding stress, sleep, mood and performance from long-term ambulatory human data, and on designing intervention systems that help people become aware of their behaviors and improve their health conditions. She completed her PhD at the MIT Media Lab in 2015. Before coming to MIT, she worked for Sony Corporation as a researcher and software engineer on wearable computing, human-computer interaction and personal health care. Recent awards include the AAAI Spring Symposium Best Presentation Award and the MIT Global Fellowship.
This talk explores the physical and cognitive limits of crowds, by following a number of real-world experiments that utilized social media to mobilize the masses in tasks of unprecedented complexity. From finding people in remote cities, to reconstructing shredded documents, the power of crowdsourcing is real, but so are exploitation, sabotage, and hidden biases that undermine the power of crowds.
Iyad Rahwan is the AT&T Career Development Professor and an Associate Professor of Media Arts & Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. He holds a PhD from the University of Melbourne, Australia, and is an affiliate faculty at the MIT Institute of Data, Systems and Society (IDSS).
In this talk, I am going to present and demo our award-winning research initiative on creating custom animations, Project Draco. Project Draco was recently released as Sketchbook Motion and was featured by Apple as "The best iPad app of the year 2016".
With Project Draco, we investigate the question of how we can enable everyone to bring life to otherwise static drawings—how can we make animation as easy as sketching a static image?
Most of us experience the power of animated media every day: animation makes it easy to communicate complex ideas beyond verbal language. However, only a few of us have the skills to express ourselves through this medium. By making animation as easy, accessible, and fluid as sketching, I intend to make dynamic drawings a powerful medium to think, create, and communicate rapidly.
Rubaiat Habib is a Senior Research Scientist, artist, and designer at Autodesk Research. His research interest lies at the intersection of Computer Graphics and HCI for creative thinking, design, and storytelling. Rubaiat has received several awards for his work, including two ACM CHI Best Paper nominations, ACM CHI and ACM UIST People's Choice Best Talk awards, and ACM CHI Golden Mouse awards for best research videos. For his PhD at the National University of Singapore, Rubaiat received a Microsoft Research Asia PhD fellowship. Rubaiat's research in dynamic drawings and animation is regularly turned into new products reaching a global audience. As a freelance cartoonist and designer, he has contributed to a number of magazines, books, and newspapers.