“The Smartness Mandate” with Professor Orit Halpern


Smartness has permeated our lives in the form of smartphones, smart cars, smart homes, and smart cities. It has become a mandate, a pervasive force that governs politics, economics, and the environment. As our world faces increasingly complex challenges, the drive for ubiquitous computing raises important questions. What exactly is this ‘smartness mandate’? How did it emerge, and what does it reveal about our evolving understanding and management of reality? How did we come to view the planet and its inhabitants primarily as instruments for data collection?

In the book ‘The Smartness Mandate,’ co-authored by Professor Orit Halpern, the notion of ‘smartness’ is presented as more than a technology: it is an epistemology — a way of knowing. In this episode of Bridging the Gaps, I speak with Professor Orit Halpern, and we delve into the concept of smartness. We explore its historical roots and its cultural implications, particularly its emphasis on data-driven technologies and decision-making processes across domains such as urban planning, healthcare, and education.

Orit Halpern is Lighthouse Professor and Chair of Digital Cultures and Societal Change at Technische Universität Dresden. She completed her Ph.D. at Harvard. She has held numerous visiting scholar positions including at the Max Planck Institute for the History of Science in Berlin, IKKM Weimar, and at Duke University. At present she is working on two projects. The first project is about the history of automation, intelligence, and freedom; and the second project examines extreme infrastructures and the history of experimentation at planetary scales in design, science, and engineering.

Our conversation begins by discussing the idea of “smartness” as presented in the book. To understand it better, we look at a few examples. The book suggests that the smartness paradigm relies heavily on collecting and analysing data, as well as monitoring people through surveillance. We talk about the possible risks and consequences of this data-focused approach for personal privacy and individual rights. Next, we talk about how the smartness idea connects with the concept of resilience. We also touch on the point, made in the book, that the smartness paradigm often reinforces existing power structures and inequalities. We explore the biases and ethical concerns that may arise with the use of these technologies. Furthermore, we explore the possibility of using the smartness approach to promote fairness and equality, and how it could be applied to create a more just society. We discuss the significance of multidisciplinarity, and the role of higher education institutions and educators in creating an enabling environment for an informed discourse to address these questions. Professor Orit Halpern emphasises the importance of exploring these questions and addressing relevant concerns to make sure we create the kind of world we truly want for ourselves.

Complement this discussion with “Cloud Empires: Governing State-like Digital Platforms and Regaining Control” with Professor Vili Lehdonvirta and then listen to Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer.

By |June 6th, 2023|Computer Science, Future, Information, Knowledge, Technology|

Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer

The future of technology is a subject of debate among experts. Some predict a bleak future where robots become dominant, leaving humans behind. Others, known as tech industry boosters, believe that replacing humans with software can lead to a better world. Critics of the tech industry express concern about the negative consequences of surveillance capitalism. Despite these differences, there is a shared belief that machines will eventually surpass humans in most areas. In his recent book “How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms”, Professor Gerd Gigerenzer argues against this notion and offers insights on how we can maintain control in a world where algorithms are prevalent. In this episode of Bridging the Gaps, I speak with Professor Gerd Gigerenzer to discuss challenges posed by rapid developments in the tech sector, particularly in the field of artificial intelligence. We discuss different approaches that individuals can adopt to enhance their awareness of the potential hazards that come with using such systems and explore strategies to maintain control in a world where algorithms play a significant role.

Gerd Gigerenzer is a psychologist and researcher who has made significant contributions to the fields of cognitive psychology and decision-making. He is director emeritus at the Max Planck Institute for Human Development, and is director of the Harding Center for Risk Literacy at the University of Potsdam. He has been a professor of psychology at the University of Chicago and a visiting professor at the University of Virginia. His research focuses on how people make decisions under conditions of uncertainty and how to improve people’s understanding of risk and probability. He has trained judges, physicians, and managers in decision-making and understanding risk.

Our discussion begins by exploring the limitations of present-day narrow and task-specific artificial intelligence systems in dealing with complex scenarios. Professor Gerd Gigerenzer’s argument that simple heuristics may outperform complex algorithms in solving complex problems is particularly noteworthy. In fact, in some complex scenarios, relying on our intuition or “gut feelings” may result in better decisions than relying on sophisticated technological systems. We then discuss the importance of assessing the risks associated with using seemingly free services that actually collect and exploit users’ data and information to sustain their business models. We delve into the topic of recommender systems that subtly influence users’ choices by nudging them towards certain features, services, or information. Next, we examine various strategies for individuals to become more mindful of the potential risks associated with using such systems, and consider ways to maintain control in a world where algorithms wield considerable influence. This has been an insightful discussion.

Complement this discussion with “Machines like Us: Toward AI with Common Sense” with Professor Ronald Brachman and then listen to “Philosophy of Technology” with Professor Peter-Paul Verbeek.

By |April 1st, 2023|Artificial Intelligence, Future, Technology|

“Working with AI: Real Stories of Human-Machine Collaboration” with Professor Thomas Davenport and Professor Steven Miller


There is a widespread view that artificial intelligence is a job-destroying technology, and both enthusiasm and doom surround automation and the use of artificial intelligence-enabled “smart” solutions at work. In their latest book “Working with AI: Real Stories of Human-Machine Collaboration”, management and technology experts Professor Thomas Davenport and Professor Steven Miller explain that AI is not primarily a job destroyer, despite popular predictions, prescriptions, and condemnation. Rather, AI alters the way we work by automating specific tasks but not entire careers, thus freeing people to do more important and difficult work. In the book, they demonstrate that AI in the workplace is not the stuff of science fiction; it is currently happening to many businesses and workers. They provide extensive, real-world case studies of AI-augmented occupations in contexts ranging from finance to the manufacturing floor.

In this episode of Bridging the Gaps, I speak with Professor Thomas Davenport and Professor Steven Miller to discuss their fascinating research, and to talk through various case studies and real-world use cases that they outline in the book. We discuss the impact of artificial intelligence technologies on the job market and on the future of work. We also discuss future hybrid working environments where AI and humans will work side by side.

Professor Thomas Davenport is a Distinguished Professor of Information Technology and Management at Babson College, a visiting professor at the University of Oxford, and a Fellow of the MIT Initiative on the Digital Economy. Steven Miller is Professor Emeritus of Information Systems at Singapore Management University.

We begin our discussion by looking at various aspects of environments where AI and human workers operate side by side, and then discuss the concept of Hybrid Intelligence. We then talk about the challenges organisations face while developing and implementing artificial intelligence-enabled technologies and solutions in enterprise environments. An important question that I raise during our discussion is: are organisations ready for large-scale deployment of AI solutions? The book is full of real-world case studies and covers a wide variety of use cases, and we delve into a number of them. This has been a very informative discussion.

Complement this discussion with “The Technology Trap and the Future of Work” with Dr Carl Frey and then listen to “Machines like Us: Toward AI with Common Sense” with Professor Ronald Brachman.

By |October 31st, 2022|Artificial Intelligence, Computer Science, Future, Podcasts, Technology|