Privacy and Ethics of AI has been a stated pillar of the Media Innovation Center’s focus since its founding in 2014, and the topic informs the center’s mission, programming, projects, events and curriculum — from IoT work in our Maker Lab and Bot Studio, augmented reality in our Mixed Reality Studio, and our BCI Lab, to our multi-institution Social Justice Reporting initiative and our research in misinformation, disinformation, and platform accountability in the algorithmic amplification of fake news and online extremism. This includes:
Hands-On Machine Learning Solutions for Journalists
We’re piloting a curriculum collaboration with John Keefe of the Quartz Bot Studio: Hands-On Machine Learning Solutions for Journalists, led by Professor Bob Britten. Machine learning is becoming an essential and powerful tool for journalism. In this curriculum, students learn to build and train a model that can recognize a particular structure or form (e.g., faces that are smiling), and to use that model to sort, classify and seek out documents matching what it has learned to identify. Because models can be trained to recognize recurring structures in text, video and images, journalists can use machine learning to categorize information and surface trends far more quickly and accurately than humans can on their own. Practical applications include identifying photos that depict human faces of a certain kind, searching a set of documents to tag all names, or sorting a set of maps into those that show certain paths and those that do not. The adaptability of the approach, and its ability to find structures across virtually limitless files, makes it a useful tool for upper-level media students trying to identify themes and trends in the subjects they cover. Professor Dana Coester is also working with John Keefe to develop and train machine learning models for a digital forensic investigative project.
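The train-then-classify workflow described above can be sketched with a tiny Naive Bayes text classifier. This is a hypothetical illustration of the technique, not the course’s actual materials; the topic labels and headlines are invented for the example.

```python
# Minimal sketch of machine-learning document classification:
# a tiny multinomial Naive Bayes model that learns to tag short
# documents by topic, then classifies unseen text.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

class TinyNaiveBayes:
    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)   # per-label word frequencies
        self.label_counts = Counter(labels)       # per-label document counts
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        scores = {}
        total_docs = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior: how common this label is overall
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for token in tokenize(doc):
                # Laplace-smoothed log likelihood of each word given the label
                count = self.word_counts[label][token] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Toy training set: headlines hand-labeled by topic (invented data).
docs = [
    "city council votes on new budget",
    "mayor announces election campaign",
    "team wins championship game in overtime",
    "star player traded before season opener",
]
labels = ["politics", "politics", "sports", "sports"]

model = TinyNaiveBayes().fit(docs, labels)
print(model.predict("council debates budget vote"))   # politics
print(model.predict("player scores winning game"))    # sports
```

A journalist would use the same pattern at scale: label a small sample of documents by hand, fit a model, and let it sort the rest of the corpus.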
Online AI Ethics Course
We developed the following online course in 2017 and piloted it in 2018. It is available to students across campus and as continuing education for targeted industry professionals, and it also serves as a foundational course in our Masters in Media Innovation program.
Ethics in an AI Society: Course Description. From Siri and Alexa to search engines and self-driving cars, artificial intelligence (AI) is rapidly transforming society. It informs the mobile apps, websites and IoT objects of everyday life, and it increasingly serves as the key mechanism driving interactivity, participation and decision making in support of many human needs. This online course helps students and professionals tackle real-world problems across diverse industries to better understand what AI is, explore the values and ethical norms of AI, and discuss how these topics relate to how we live, learn and work, and to our role in society.
- Taplin, J. (2017). Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy. Little, Brown and Company.
- Webb, A. (2018). The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Perseus Books.
Of special note: the author of The Big Nine is futurist Amy Webb, a world-renowned expert on artificial intelligence who is also a member of the College’s Media Innovation Center Advisory Committee and a close collaborator with MIC Creative Director Dana Coester.
AI Capstone Course
In 2017 the College of Media collaborated with the Lane Department of Computer Science and Electrical Engineering to teach a capstone course in AI at the Media Innovation Center. Students in the senior-level computer science course worked in teams to develop and implement their own AI programs designed to address the issue of misinformation, disinformation and fake news. This collaboration with the computer science course serves as an example of the Media Innovation Center’s leadership in initiatives, projects, research and curriculum innovations at the intersection of technology, media and information networks.
In 2017 the Media Innovation Center hosted the third in a series of signature hackathon events designed to tackle challenges and emerging opportunities in technology markets. Hacking the Gender Gap: Diversifying AI was a three-day immersion event featuring women at the forefront of tackling AI’s diversity problem. Guest speakers included cyberbullying expert Michelle Ferrier, founder of TrollBusters; human rights attorney and social entrepreneur Flynn Coleman, who writes about humanity’s future with AI; Susan Etlinger, a global expert in AI, data and digital ethics with the technology futurist company Altimeter; and Erin Reilly, Director of Innovation and Entrepreneurship for the Moody College at UT-Austin. The hackathon addressed some of AI’s risks, including emerging threats to privacy and the danger of replicating existing institutionalized biases in AI, and explored opportunities to create a more diverse AI that can benefit communities around the world.
Curriculum, projects and research in IoT, including machine learning, voice interfaces and other AI applications in media, led by College of Media faculty member Bob Britten, who also leads the Maker Lab and Bot Studio for the Media Innovation Center.
Research in misinformation, disinformation, and platform accountability in the algorithmic amplification of online extremism by Professors Dana Coester and Joel Beeson, which includes funding from the Ford Foundation and the Democracy Fund, as well as collaborations with researchers at Data and Society (a research institute focused on the social and cultural issues arising from data-centric and automated technologies), the Harvard Shorenstein Center on Media, Politics and Public Policy, and the American Press Institute’s research on the role of algorithms in misinformation and polarization in America.