IBM, Design and Technology: A Conversation with Jon Iwata
The Value of Design
Jon Iwata is Executive in Residence at the Yale School of Management, an appointment he received upon his retirement from IBM in 2018, capping a 34-year career at the company. Jon led IBM’s global marketing, communications, and corporate citizenship organization for nearly a decade. His most recent position was IBM Chief Brand Officer and Senior Vice President. IBM’s brand is recognized as one of the most valuable in the world, a status that has been enhanced by the development of strategic brand platforms, which express IBM’s unique strategy, purpose, and values. Jon is the architect of several of these brand platforms, including e-business, Smarter Planet, and Watson.
Liwei Wang As you know, Thomas Watson Jr. had the famous quote: “Good design is good business.” IBM is a company with over 100 years of history. Do you think that design played a big part in its success and helped it adapt throughout the years?
Jon Iwata I think it has helped a lot in terms of differentiating IBM. Except for brief periods of time, we have never been a consumer company. For an industrial or a B2B (business to business) company to embrace design might surprise people. What Thomas Watson Jr. believed led to a golden age of hiring. Architect and designer Eliot Noyes, who was at the Museum of Modern Art at the time, first refused to become an employee of IBM because he thought it would compromise his objectivity. He was retained as arguably the first director of a corporate design program in business. Noyes collaborated with Charles and Ray Eames, Eero Saarinen, Paul Rand, and many others. A principle that came out of that era is Unity, Not Uniformity. Corporate design systems are often about establishing standards and enforcing compliance because the goal seems to be uniformity. But unity is a different thought. Unity is about identifying the character and ethos of a company. If you could get the Eameses, Paul Rand, and Saarinen to understand the ethos of the company, then they could express it through architecture and industrial design, through the electric typewriter or the first mainframe data centers, through massive exhibitions like the World’s Fair in the 1960s, or temporary exhibitions like Mathematica, or through films like Powers of Ten.
The IBM logo appears here and there, but there was unity around the idea of technology as a force for positive progress in the world, and not as something to fear. Those principles are still powerful today.
LW This strategy is in stark contrast to the zeitgeist at the time. The technological aspects of modernism implied that uniformity and standardization were the way of the future. Why do you think that IBM took this different approach?
JI Maybe because we weren’t a consumer company. If you tried to unify everything from large-scale industrial computation devices to humble electric typewriters and everything in between, the only thing that could be consistent was the logo. We felt that it was more important to change people’s thinking. That’s where the Eameses were powerful. The celebrated work we did in collaboration with them strove to change people’s thinking about computers. This is relevant even today. Today’s version of it is A.I., which raises lots of issues. I had a lot to do with Watson—not the technology, but in terms of positioning Watson in the world: tactical things like Watson’s voice, name, and appearance. Beyond those things, there are the questions of artificial intelligence and the role of our social intelligence in the world. If the Eameses were alive today, I would commission them to create whatever they thought was appropriate—films, pop-up stores, digital experiences, or whatever—so that people understood A.I. and responsibly embraced its potential.
LW I think the popular perception of Watson comes from feats like beating human champions at Jeopardy! (and, before Watson, IBM’s Deep Blue at chess) and creative initiatives where Watson created new food recipes. Was this strategic?
JI Yes. Part of it was to demonstrate Watson’s relevance to the everyday. People care about cancer diagnosis, an area where Watson collaborated with Memorial Sloan Kettering Cancer Center, and people care about tax preparation, where Watson collaborated with H&R Block. But people also care about recipes. We’re always trying to create something a little unique to delight our friends and family. People also love music—there is a kind of mystery as to what constitutes a hit song. The producer Alex da Kid worked with Watson to compose a song, and it did quite well on iTunes. We wanted to show Watson was relevant to a wide range of things that people care about.
LW On the topic of new technology, can you talk a little bit about speculative futures, your 100 CEO project, and how that came about?
JI This was one of the last projects I worked on at IBM before I retired. I had one-on-one interviews with 100 different CEOs, and they talked to me about the bets they were placing on the future. It was an eye-opening experience because the magnitude of those bets said something about this moment. Whether they represented an oil company, a pharma company, a retailer, or a publisher, they were all trying to become the same kind of company. By that, I mean they all wanted to use data as the basis of their competitive advantage. They were all fascinated with digital platforms. They wanted to be the platform, or to use other people’s platforms without losing their value and differentiation.
The biggest issue that obstructed their bets was the cultural transformation of their own companies. One hundred percent of them said the biggest risk was their own culture: their ability to change their people, their mindsets, their speed, their risk-taking, and their comfort level with analytics.
LW To sum up your project, do you have any takeaways?
JI There is an en masse redesign of the large corporation underway. It’s pan-industry, pan-geography, and independent of age. I talked to fintechs (financial technology companies) as well as 250-year-old companies from Europe, and all those businesses are moving at the same time. There is a common driver, and they’re all trying to become the same kind of business. I think we’re going through an exciting wave of corporate transformation. Out of that comes a lot of opportunity because the CEOs say their biggest problem is culture change; people who have the right mindset and skill sets will probably do quite well for a while.
LW I’m surprised you mentioned that it’s independent of age. It would seem like younger companies may have an advantage over companies with 250 years of history. Can you elaborate?
JI Sometimes the younger companies are led by older executives. They go through the same stages of being born, and they bring in people who can help manage them. The youthfulness of a company, and perhaps its workforce, bumps up against things like scaling or the sudden need to have systems like H.R. and business controls.
Some companies out of Silicon Valley were born not much more than 10 years ago. They’re already contemplating whether they’re going to have an act two. The first thing that made them a lot of money isn’t going to be the same thing that makes them successful for the next 20 years. You can think of a few large, successful Silicon Valley-based companies that have to fundamentally rethink their business models and their cultures. Sometimes those workforces expect their companies to take stands on social issues, and management has encouraged people to be themselves and to speak out. Those management teams are suddenly saying: actually, you’re not supposed to feel empowered on every issue. The workforce doesn’t like hearing that, so those leadership teams are asking: how do we keep our existing culture but grow up a little bit?
LW I want to shift and ask you about a contemporary phenomenon: Instagram. For me, Instagram is fascinating because, firstly, from a design point of view, it has educated a large population about things like photography, exposure, and color contrast. Secondly, on some Freudian level, I believe people are aware that what they project onto Instagram and other social media is a different version of themselves. Then all of that gets tied into this idea of data and what companies are pulling from our personalities. What do you think about all that?
JI I’m simultaneously really concerned, and I’m not.
My concern is over the weaponization of information. There is a spectrum here. On one end of the spectrum, we self-medicate based on our likes and preferences. Through the platforms that we depend on for news and information, we select things that we are interested in, and the algorithms reward that. Our behavior reinforces our worldviews, interests, and beliefs—not just politics. It encompasses music and cooking and cars and sports teams. When we get this reinforcement, we lose exposure to a diversity of things. And again, that includes political things, but I’m equally concerned about art and music and other things happening in the world.
Instagram makes a lot of money because a lot of companies want to know what we’re interested in, and they monetize that. I’m not an alarmist over that because, you know, if the product is free, you’re the product.
Where I get really concerned is when people’s perceptions and beliefs are manipulated. Deepfake technology, which fabricates videos depicting things that never happened, is becoming really sophisticated. You combine that with this personalized platform phenomenon… I think we’re in a world that we just don’t fully understand. In a way, I welcome this world—there are many advantages to it—but there are a lot of unanswered questions, and a lot of responsible people have to step up.
LW I think we’re really seeing the cracks in some of these platforms—and I don’t want to make too much of an inference—but the recent New Zealand shooting, live-streamed on Facebook, was a prime example of that. What role do you think ethics plays in technology companies with platforms? How can we start to build ethical behaviors and moral or philosophical questions into the platforms that almost everybody is using these days?
JI It’s ethics, and it’s other things, too. For example, A.I. and the platforms are going to become the same thing. Explainability is a really big issue now. It’s not sufficient for the A.I. in a platform or in a device to make a recommendation or a decision if it can’t tell me how it came up with that recommendation or decision. It’s not a trivial thing, because these A.I. models are becoming so sophisticated that we have to design explainability into the A.I. itself. The A.I. has to explicitly say: I recognize all of those pictures of cats because they all have tails and fur and whiskers. It’s a simple example, but if you don’t build explainability into the system, then it’s a black box. People won’t trust it, nor should they.
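To make the cat example concrete, here is a minimal sketch of the explainability idea, assuming a toy image-tagging task and the scikit-learn library; the data and feature names are invented, and this is not IBM’s or Watson’s actual method. The point is simply that an interpretable model can print the rules behind its answer instead of returning a bare verdict.

```python
# A toy illustration of explainability: an interpretable model that can report
# the rules behind its decision instead of returning a bare verdict.
# The data, feature names, and choice of scikit-learn are assumptions made for
# this sketch; this is not how Watson or any IBM system actually works.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row describes one image: [has_tail, has_fur, has_whiskers]; label 1 = "cat".
X = [[1, 1, 1], [1, 1, 1], [0, 1, 0], [0, 0, 0], [1, 0, 0]]
y = [1, 1, 0, 0, 0]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The learned rules are human-readable, so the system can "say" that it calls
# an image a cat because of whiskers, fur, and a tail.
print(export_text(model, feature_names=["has_tail", "has_fur", "has_whiskers"]))
```

Production systems rely on more elaborate techniques than a small decision tree, but the principle is the same: the system states the basis for its answer.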
A.I. bias doesn’t mean that people are deliberately putting their biases into this system. It simply means that bias happens. There are a lot of classic examples where facial recognition has very high accuracy when recognizing white men and low accuracy when recognizing non-white men. I’d say that’s bias. It’s not because there are people in there who only want certain things recognized… white men maybe. But the real issue is we need more images of non-white men that are properly tagged, so A.I. systems can train on more than just white men.
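As a hypothetical illustration of how bias can “just happen” without anyone putting it in, the sketch below trains a classifier on data that under-represents one group and then measures accuracy for each group separately. The groups, features, and sample sizes are invented for demonstration; numpy and scikit-learn are assumed available.

```python
# A hypothetical demonstration of bias arising from imbalanced training data:
# a classifier trained mostly on group A performs well for A and poorly for
# the under-represented group B. All numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    """Generate n examples whose label depends on one group-specific feature."""
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

# Training data: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, informative_feature=0)
Xb, yb = make_group(50, informative_feature=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Measuring accuracy per group, rather than overall, reveals the disparity:
# high accuracy for the well-represented group, near chance for the other.
Xa_test, ya_test = make_group(1000, informative_feature=0)
Xb_test, yb_test = make_group(1000, informative_feature=1)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```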
The trolley dilemma, which is a beautiful, classic, philosophical imponderable, is real. An autonomous vehicle that suddenly sees, through its sensors and cameras in real time, that it can either continue straight and kill five people or turn a different way and kill you is going to make that choice. The trolley dilemma is a dilemma because there isn’t a right or wrong answer. Whatever the answer is, you want a human and not a machine to decide it.
LW Do you think it’s about accountability? I keep thinking back to Facebook. It’s so easy for Facebook to say they’re all about free speech. But we’ve seen issues again and again, especially in areas where English is not the language. Where Facebook does not really understand the cultural landscape that well, it has become a weaponized tool. Who is really accountable for this, and do you think that somebody should step up?
JI For platforms like Facebook, there is the threat of regulation. They’re not going to like that because it impinges upon what they can do, and they argue it will stifle innovation and so forth. On the other hand, it seems that society is saying that this is not acceptable. It is interesting that Facebook never said that the Russians hacked them. Rather, the platform was used for purposes that were not good. But the platform exists to be used in certain ways. To me, that says it can evolve so as to prevent people from doing things that you don’t want them to. The controversy is that you can’t claim “we’re not a content company, we’re just a platform or technology company” and yet make all of your money from monetizing content and the people who interact with your content. But it isn’t limited to Facebook. Every company is going to use A.I., therefore every company is going to have to grapple with these same issues. In the end, transparency (explainability, bias, as well as knowing when I’m interacting with an A.I. system) is going to end up being really important. Remember when Google put out that recording of one of its A.I. systems calling restaurants? Some people thought: hey, that’s really cool, a human being thought they were actually taking a reservation from another human being. And the other half said: this is outrageous, you’ve fooled people. So transparency in lots of dimensions is necessary: the ability to control, the ability to opt out, the ability to turn things off.
LW As a final point, I’d like to ask: if you had to design a company from scratch right now, what elements would you put into it? What do you think is relevant for a company to survive today?
JI Based on my experience at IBM, the thing that I would really obsess about is a culture of learning—constantly understanding what is happening around us and asking what it means. You need to have critical thinkers. I think strategic thinking is overrated and critical thinking is undervalued. I would find people who really debate over what a sonnet means, or what a movie really means. Why? Because they’re going to look at something and dig through the layers, looking for a deeper meaning. When a new competitor comes along, or consumers change their wants, or a new technology emerges, the team is not going to immediately say: well, that’s what it is. Often, that’s not what it is.
The other thing I would be is asset-light. The assumption would be that we don’t have to own something, only the right to use it. I think that maximizes financial performance and gives the greatest flexibility. I’m always leery when companies build massive new headquarters buildings, because that seems to represent the “peak” of something.
LW That last point is not very good news for architects, and it brings up the idea of a tension between sunk capital, such as built headquarters, and adaptability and change.
JI When I had dinner with the WeWork folks recently, they were talking about how corporate cultures and their needs are changing. At some point WeWork will want to move from just being a provider of space at a good price to being a provider of culture. WeWork could ask what kind of culture companies desire, and then build and design for that. For companies that want to accelerate cultural transformation—to go not just to a different building, but to a new environment with all the elements that drive culture—that’s a very compelling value proposition.
A lot of companies like WeWork because it’s like Silicon Valley, and there is free coffee and massages and stuff. What WeWork would say is: we want to do more than just provide a pleasant environment; we want to be a place where work continues to change.
I don’t think asset-light means the demise of great architecture. I just think that it is a different challenge. Architects can do more than simply create a wonderful building; they can help companies design for the right culture.