“The AI Dilemma”: A Review Of Key Issues Facing Management And Society


The latest book sent to me for review was a good read. “The AI Dilemma,” by Juliette Powell and Art Kleiner, is subtitled “7 Principles for Responsible Technology.” It’s good for two important reasons. First, it makes sense. Second, it’s rather short. That combination makes it an excellent book for busy managers and politicians who want an introduction to the issues that will decide whether adoption of artificial intelligence (AI) helps or harms our society.

The book begins with a discussion of four different logics of power, presented through the ubiquitous four-cell grid. With institutional vs. individual on one axis and private vs. public on the other, the authors describe the logics that apply to four stakeholder groups: engineering, social justice, corporate, and government. The explanation for each group is short and clear.

A chapter on keeping risks to humans in mind follows, though they don’t focus on disappearing jobs as much as I feel they should. In this chapter and the rest, they bring their ideas back to those logics, providing something for each of the groups to take away and helping each understand a bit more about the other parts of the matrix.

The book then switches to another topic dear to me, the black box. First, they give a good explanation of why they call it a closed box instead, and I’m fine with that term. The lack of explainability is a serious risk in all systems, but it is increasingly important to AI adoption. They cover the different types of explainability that must be considered, from how the code works (yes, there’s code; it’s not magic) to presenting results in a way that helps non-technical people gain trust in these systems.

The middle two chapters deal with data rights and bias in systems, and the two strongly overlap. While the authors use good examples throughout the book, the ones here really help non-technical readers understand why continuing to let companies use and abuse our information is not a good thing, and why regulations ensuring that proper data sets are used to minimize bias are needed. I also like a suggestion I’ve seen before: that people should own their own data, including getting paid for its use.

While the rest of the book can be generalized to any technology or business even though it focuses on AI, the last three chapters are truly more generic. They cover stakeholder accountability, an explanation of why loosely coupled systems work better, and a discussion of creative friction. These are areas all four power groups should understand much better. While programmers already understand loose coupling at the technical level, it’s also important for processes and organizational structure.

This is a commute book. It’s easy to read, clear and concise. It will help any reader who is not already an expert in responsible AI gain a solid understanding of the issue. I heartily recommend it.

