“The AI Playbook: Mastering the Rare Art of Machine Learning Deployment” with Eric Siegel


The most powerful tool often comes with the greatest challenges. In recent years, machine learning has emerged as the world’s leading general-purpose technology, yet its implementation remains notably complex. Beyond Big Tech and a select few leading enterprises, many machine learning initiatives fail to deliver on their potential. What is missing? A specialised business approach and a development and deployment strategy tailored for widespread adoption. In his recent book “The AI Playbook: Mastering the Rare Art of Machine Learning Deployment”, acclaimed author Eric Siegel introduces a comprehensive six-step methodology for guiding machine learning projects from inception to deployment. The book illustrates the methodology through both successful and unsuccessful anecdotes, featuring insightful case studies from companies such as UPS, FICO, and prominent dot-coms. In this episode of Bridging the Gaps, I speak with Eric Siegel. We discuss this disciplined approach, which empowers business professionals and establishes a sorely needed strategic framework for data professionals.

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series and its new sister, Generative AI World, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery,” executive editor of The Machine Learning Times, and a frequent keynote speaker.

We begin our discussion with Eric’s notable observation, highlighted in both his presentations and his book, that the “AI hype” is a distraction for companies, and he elaborates on this notion in detail. We also explore his suggestion to shift focus from the broad term “AI” to the more specific “machine learning”. Our conversation then turns to the challenges companies and professionals face in conceptualising and deploying AI-driven ideas and solutions, and to whether specialised teams and focused strategies are needed to address these challenges effectively. Next, we examine the six-step BizML process introduced by Eric in his book, compare it to the concept of MLOps, and dissect its components and implications. Overall, this has been a highly enlightening and informative discussion.

Complement this discussion with “Working with AI: Real Stories of Human-Machine Collaboration” with Professor Thomas Davenport and Professor Steven Miller and then listen to “Machines like Us: Toward AI with Common Sense” with Professor Ronald Brachman

February 11th, 2024 | Artificial Intelligence, Computer Science, Podcasts, Technology

“The Smartness Mandate” with Professor Orit Halpern


Smartness has permeated our lives in the form of smartphones, smart cars, smart homes, and smart cities. It has become a mandate, a pervasive force that governs politics, economics, and the environment. As our world faces increasingly complex challenges, the drive for ubiquitous computing raises important questions. What exactly is this ‘smartness mandate’? How did it emerge, and what does it reveal about our evolving understanding and management of reality? How did we come to view the planet and its inhabitants primarily as instruments for data collection?

In the book ‘The Smartness Mandate,’ co-authored by Professor Orit Halpern, the notion of ‘smartness’ is presented as more than a technology: it is an epistemology, a way of knowing. In this episode of Bridging the Gaps, I speak with Professor Orit Halpern, and we delve into the concept of smartness. We explore its historical roots and cultural implications, particularly its emphasis on data-driven technologies and decision-making across domains such as urban planning, healthcare, and education.

Orit Halpern is Lighthouse Professor and Chair of Digital Cultures and Societal Change at Technische Universität Dresden. She completed her Ph.D. at Harvard. She has held numerous visiting scholar positions including at the Max Planck Institute for the History of Science in Berlin, IKKM Weimar, and at Duke University. At present she is working on two projects. The first project is about the history of automation, intelligence, and freedom; and the second project examines extreme infrastructures and the history of experimentation at planetary scales in design, science, and engineering.

Our conversation begins with the idea of “smartness” as presented in the book, which we unpack through a few examples. The book argues that the smartness paradigm relies heavily on collecting and analysing data and on monitoring people through surveillance, so we talk about the risks and consequences this data-focused approach carries for personal privacy and individual rights. Next, we discuss how smartness connects with the concept of resilience and how, as the book argues, the smartness paradigm often reinforces existing power structures and inequalities. We explore the biases and ethical concerns that may arise with the use of these technologies, as well as the possibility of using the smartness approach to promote fairness and equality and to create a more just society. We also discuss the significance of multidisciplinarity and the role of higher education institutions and educators in creating an enabling environment for informed discourse on these questions. Professor Orit Halpern emphasises the importance of exploring these questions and addressing the relevant concerns to make sure we create the kind of world we truly want for ourselves.

Complement this discussion with “Cloud Empires: Governing State-like Digital Platforms and Regaining Control” with Professor Vili Lehdonvirta and then listen to Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer

June 6th, 2023 | Computer Science, Future, Information, Knowledge, Technology

Reclaiming Human Intelligence and “How to Stay Smart in a Smart World” with Prof. Gerd Gigerenzer

The future of technology is a subject of debate among experts. Some predict a bleak future in which robots become dominant and leave humans behind. Others, the tech industry’s boosters, believe that replacing humans with software can lead to a better world. Critics of the tech industry express concern about the negative consequences of surveillance capitalism. Despite these differences, there is a shared belief that machines will eventually surpass humans in most areas. In his recent book “How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms”, Professor Gerd Gigerenzer argues against this notion and offers insights on how we can maintain control in a world where algorithms are prevalent. In this episode of Bridging the Gaps, I speak with Professor Gerd Gigerenzer about the challenges posed by rapid developments in the tech sector, particularly in the field of artificial intelligence. We discuss approaches individuals can adopt to become more aware of the potential hazards of using such systems, and we explore strategies for maintaining control in a world where algorithms play a significant role.

Gerd Gigerenzer is a psychologist and researcher who has made significant contributions to cognitive psychology and the study of decision-making. He is director emeritus at the Max Planck Institute for Human Development and director of the Harding Center for Risk Literacy at the University of Potsdam. He has been a professor of psychology at the University of Chicago and a visiting professor at the University of Virginia. His research focuses on how people make decisions under conditions of uncertainty and how to improve people’s understanding of risk and probability. He has trained judges, physicians, and managers in decision-making and understanding risk.

Our discussion begins by exploring the limitations of present-day narrow, task-specific artificial intelligence systems in dealing with complex scenarios. Professor Gerd Gigerenzer argues that simple heuristics may outperform sophisticated algorithms in such settings; indeed, in some situations, relying on our intuition or “gut feelings” may lead to better decisions than relying on elaborate technological systems. We then discuss the importance of assessing the risks of seemingly free services that actually collect and exploit users’ data to sustain their business models. We delve into recommender systems that subtly influence users’ choices by nudging them towards certain features, services, or information. Finally, we examine strategies individuals can use to become more mindful of the potential risks of such systems, and we consider ways to maintain control in a world where algorithms wield considerable influence. This has been an insightful discussion.

Complement this discussion with “Machines like Us: Toward AI with Common Sense” with Professor Ronald Brachman and then listen to “Philosophy of Technology” with Professor Peter-Paul Verbeek

April 1st, 2023 | Artificial Intelligence, Future, Technology