
IBM’s Arin Bhowmick on designing ethical AI

Design has always been a big part of IBM, but its impact has never been bigger. And as they say, with great power comes great responsibility.

When we think about design at IBM, we probably remember Paul Rand’s iconic Eye Bee M poster or the famous motto “good design is good business”. But while the spirit is still the same, IBM has come a long way. Since Thomas Watson Jr. took over the company in 1952 and decided to make his mark through modern design, they’ve become the single largest design organization in the world, with over 1,500 designers working on innovative products from machine learning to cloud to file sharing.

And that’s where Arin Bhowmick comes in. With a Master’s degree in human-computer interaction and over two decades of experience in user research and user experience at companies like Oracle, he now leads the design team across all product offerings at IBM.

Being responsible for the design of AI services used by millions of people is a huge responsibility, and Arin is deeply aware of that. Over the years, he’s become one of the biggest advocates for transparent design. According to him, good design doesn’t need to sacrifice transparency, and imperceptible AI isn’t ethical AI. In fact, he and his team even created a set of guidelines called “Everyday ethics for AI” to help designers and developers create systems that are trustworthy.

In this episode of Inside Intercom, Fergal Reid, our own Director of Machine Learning, sat down with Arin to talk about the role design plays at IBM, the importance of user research, and the principles of building ethical, sustainable AI.

If you’re short on time, here are a few quick takeaways:

  • Good design is good business. Every designer should ask themselves – how can I apply the tenets of good design to drive better customer and business outcomes?
  • User research is a vital part of the design process. By constantly observing the customer, you find out how they define success in their organization, and therefore, how you can help them achieve it.
  • Playfulness pays off. Sometimes, getting honest feedback from customers works best in a relaxed, fun environment, where they can let their guard down. Arin has even successfully conducted customer workshops with Lego bricks.
  • Prototype it til you make it. The design team at IBM likes to employ a “make to learn” method. For them, everything is a prototype, and prototypes are the perfect starting point for getting to the real problems that need to be solved.
  • Transparency builds trust. For Arin, explainability is a key element when it comes to ethical AI. It’s easier to build trust when you open the black box – show the quality, the source and diversity of the data, and the rationale of the model.
  • Factor in future costs. AI keeps learning and morphing well after deployment, so you need to take into account all the costs associated with maintaining the AI. If you’re not ready to put in the time, you’re better off not building it in the first place.

If you enjoy our discussion, check out more episodes of our podcast. You can subscribe on iTunes, on Spotify, or grab the RSS feed in your player of choice. What follows is a lightly edited transcript of the episode.


Innovating through design

Fergal Reid: Arin, thank you very much for coming here today. We’re really delighted that you’ve taken the time to come on our show. To just start things off, can you tell us a little bit about yourself and your background and what drew you to IBM?

Arin Bhowmick: Thank you for having me on the show. I am a user experience practitioner from the day I started thinking about life. By that I mean, from an academic standpoint, I did my bachelor’s in industrial engineering, where I learned a little bit about the mechanics of producing things. Then I graduated into the master’s program in human-computer interaction, where it was really about human-machine relationships and the art and science of user experience.

From there on, I started my career at Oracle. I spent around 13 years or so there and got a chance to work on a gamut of products, all the way from really technology-oriented products and users, like databases and application servers, to the other end of the spectrum, like self-service apps for salespeople to put in their contact information. That drew me to the power of design and how design can really help solve problems regardless of the domain.

Beyond that, I joined a smaller company called Progress Software, where I set up a user experience practice as they were getting into the space of platform-as-a-service and data integration. What drew me to IBM was the chance to be part of that infrastructure and help influence the transformation of this 100-year-old company.

“I think of myself as a designer, as someone who can help solve a problem or generate a business outcome”

The heritage of IBM, as we all know, is one of innovation: from punch cards to electric typewriters to mainframes to PCs, and now software, whether you think about cloud, security, blockchain, quantum computing, or AI. The essence of it was that I could be part of the journey to lead with design-based differentiation, and that was a huge draw.

Fergal: Cool. You said you started off in industrial engineering and human-computer interaction. Do you think of yourself more as a designer or an HCI expert, or a user researcher? Can you help me understand, when you look at it holistically, what you think of yourself as now?

Arin: I think of myself as a designer, as someone who can help solve a problem or generate a business outcome. In doing so, we’re going to use sort of different practices and competencies to help, like user research or systems thinking or conversational elements, things like that.

Fergal: Awesome. Can you tell me about your role with IBM? I know IBM is a large organization, and honestly, from the perspective of a company like Intercom, just thinking about the scale and everything that goes on there is almost daunting from the outside. Can you tell us a little bit about what you do, what your role is, and where you sit in IBM?

Arin: I am the chief design officer and design executive for IBM products, mainly on the software side, I would say. I wear three different hats, if I can call it that. As a chief design officer, I’m responsible for the visioning and the design concepts and connecting the dots between these technologies to build products that users love.

“I’m responsible for ensuring that design has an equal seat at the table”

As a design executive, I’m responsible for ensuring that we build products that generate business outcomes that are in line with the mission statement of the company, and design has an equal seat at the table. Finally, as a business executive, in my own way, I’m trying to help IBM be a dominant player in the world of hybrid cloud and AI.

Fergal: Cool. That sounds like an amazing amount of leverage. It sounds like a huge span of influence. Do you achieve this by setting vision, or do you run several orgs? How do you achieve that leverage?

Arin: From an organizational standpoint, it’s a centralized organization with all the designers tied to the different missions of the different business units and portfolios. They all report up together. We share design systems, design principles, the way we work across the board. That’s the practice we have.

Good design is good business

Fergal: Got it. I’d love to learn a little bit about design thinking at IBM. We have a quote here from one of the, I guess, key people in IBM in the ’50s, Watson Jr., who famously said that good design is good business. I’d love to hear a little bit about what this means. Does this still hold true? How do you operationalize that? Is that just a nice soundbite, or is it something that operationally affects the business of IBM?

Arin: Let me just take a step back and give my perspective on what we mean by good design, and then we will get to good business. Good design in enterprise is really about enabling any user to get their job done in a system or a set of products without any friction. Good design is about increasing user engagement and their trust in the product and the system they’re working with. Good design is also about ethical design, so doing no harm, avoiding dark patterns. Good design is perhaps also about solving real problems and real user pain points.

“Good design as a principle is really designing for the essentials versus the superficials. That’s good design”

My personal perspective on good design as a principle is really designing for the essentials versus the superficials. That’s good design. Good business is, how can we apply the tenets of good design to drive better business and better user outcomes and customer interactions?

If we have a great portfolio of products that serve user needs and user experience becomes a value proposition, then it improves the return on investment, reduces things like support costs, increases brand loyalty, increases the number of users who are willing to try out a new product, which eventually helps user adoption and engagement. There are direct and indirect relationships of design to business. I think that was the essence of the quote.

Fergal: Can there be tension there? I really like the way you set that up, which is that you’re trying to achieve a business objective, you’re trying to solve the user’s problem, and to that extent, good design is good business. Is there a tension there sometimes where maybe the user wants something cheaper or faster, but then we feel that we want to design it more? Can things be over-designed? Can they be under-designed? Is there ever any tension, or would you say that these two things should always be completely aligned?

“We believe in reinvention. We don’t get married to our designs or get too stuck on them. We keep iterating”

Arin: In an ideal world, yes, it should be completely aligned, but I would be remiss to say that there isn’t tension. Maybe that tension is a good thing. I will give you a little bit of a perspective on how we deal with this tension. You asked a little bit about how we practice design thinking at IBM, and it’s somewhat tied to this perspective.

In IBM, we’ve taken the design thinking principles out there – and there are many – and applied them to enterprise, where scale is a problem. We have three components to work with, but the main part is the users. Customers are our north star.

First, we have principles that help guide us, one of which is that we believe in reinvention. We don’t get married to our designs or get too stuck on them. We keep iterating. We intentionally create teams that are diverse so that we have different perspectives on how the design needs to come through. We have this concept of a loop that drives us, where we observe the customers, the pain points, the mental models, and so forth, we reflect, and then we make, and we do this over and over again.

Finally, we have this concept of “hills”. These are like mission statements that get alignment between the product manager and the business side, the feasibility aspect of it, the go-to-market part of it, the user needs part of it, and into a benefit statement that has a clear and tangible differentiation built-in. This alignment of missions through the hills framework is what helps us reduce this tension, because I feel a lot of our challenge in bringing good design and good products has to do with alignment.

Fergal: Are those iterative processes that you just keep on revisiting again and again and again as you build the product and bring it to market?

Arin: It’s definitely iterative. Depending on different projects or products, some are organic, so we start from the ground up. And in those cases, we have the luxury of defining an alignment from day one. But we also have products that have been in the market for a long time, and our users have evolved, so we need to continuously iterate towards those goals.

Doubling down on user research

Fergal: I understand you have a team of user research professionals within your design org. I’d love to understand a little bit about how their input informs your process and your product strategy. How do you start? You mentioned that AI products have specific risks in terms of user trust that make it really important to get users in there early. Do you use your research team to try and mitigate those risks? Can you tell us about doing research for AI products?

Arin: Yeah, absolutely. Our user researchers, at a high level, partner with product designers and product managers to produce data-driven insights as well as intuition-based insights that we get directly from users to influence product planning and development. By doing user research as a profession, you can get different kinds of data. You can run generative studies, you can run evaluative studies, all the way from structured usability studies to very unstructured diary studies. You take all of this information and feed it into the funnel that helps inform the product.

“User research actually helps uncover what I’d call the five Ws and two Hs. Who, what, where, when, and why, how, and how much”

From a value proposition of user research, I feel like user research actually helps uncover what I’d call the five Ws and two Hs. Who, what, where, when, and why, how, and how much. These are all variables that are important to decide where to take the product or the strategy. In terms of strategy, we try to marry the objectives to the division- or product-level OKRs. Some things we do include identifying unmet or latent needs in existing or new markets or, say, modeling key ecosystems and market segments. We help aid prioritization of roadmaps and milestones, as I said before.

We also look at the competitive landscape and see how they’re responding to market opportunities with experience. We do things like brand equity evaluation and user perception studies, because perception, in this day and age, goes a long way in user adoption. We also identify factors that influence purchasing and adoption. Especially in enterprise software, the people who end up evaluating the product may not be the ones who are buying the products, so we expand our reach into different kinds of personas.

“We define the experience outcomes based on how our customers define success, not how we do it”

Then, we observe how installations are performed with users. This is the bread and butter part. As we design our products, we need to ensure that we evaluate them and define the experience outcomes based on how our customers define success with our offerings, not how we do it.

We also try to keep an eye out: can we benchmark our experiences? Can we measure where we are better or worse than our competitors or expectations? A lot happening there, and we have a lot of fun, too, by the way.

A little story. We were getting into a newer market around data ops, so we pulled together a customer workshop with over 30 customers. We did our process and framework and there were 14 or 15 focus areas, but we wanted to scale it up, and we wanted to expand the problem segment to figure out where our customers are in their AI maturity cycle because as we know, AI as a technology is still up and coming in terms of adoption.

“We can build better products if we can understand user expectations and psychological elements that help their adoption”

We wanted to figure out where they are, so we created this Lego wall, literally an analog Lego wall, and we invited the customers to place Legos…

Fergal: Legos as in Lego bricks?

Arin: Lego bricks.

Fergal: Awesome, awesome.

Arin: It’s a tangible way of doing research and data collection. They were asked to sort the Lego blocks into their AI maturity and different phases of their journey. From that, we found that data lineage is one part that really bothers them, and so we ended up creating a product for it. We also tend to have a lot of fun with research.

Fergal: I guess if it’s fun for the customer, you start to hear what they really think. People relax, they have their guard down, and they give you the good stuff. They don’t tell you what they think you want to hear, right?

Arin: Yeah. That’s part of the relationship-building. We can build better products if we can understand user expectations and psychological elements that help their adoption. When we get them in a setting where it’s safe and we are creatively brainstorming, we break a lot of boundaries, so that really is helpful.

Learn, reflect, and iterate

Fergal: We find a huge thing that research helps us with at Intercom is the problem definition. Paul Adams, our Head of Product, is always telling us to spend more time defining the problem, really understanding the problem that we’re trying to solve to make sure we don’t spend a whole amount of R&D effort solving a problem nobody cares about. And research is absolutely key for that.

One thing we often find when trying to research for AI products is that it’s very difficult to actually get good feedback from customers before we have a prototype, before we have an early version of the product that will actually run on their data, because it’s just too abstract otherwise. You can tell someone, “Hey, do you want a self-driving car?” and at a really, really high level, everyone is going to say yes. But when you actually get into it, and you say, “Well, okay, this is how it’s going to perform on your road right around your house, and here’s the first version of it,” the complexity arises.

We often find that we need to build prototypes very early when doing research for AI products. I’d love to understand if that’s something that resonates with IBM’s experience, or if perhaps at the scale that you operate, you handle your problem definition or your product definition in some sort of different way.

“We use prototypes as almost provocations for discussing the problems that need to be solved”

Arin: We’re pretty much in the same boat. I think we have a “make to learn” culture. We use prototypes as almost provocations for discussing the problems that need to be solved. Sometimes it spurs other ideas. Sometimes it validates the use of mental models and their use cases and requirements, and sometimes it weeds out the vanity projects. AI is somewhat of a black box in a lot of users’ minds, and to be able to expose what AI does, prototypes go a long way.

Fergal: Without a prototype, it’s not real enough to really understand whether it solves the problem or not.

Arin: Yeah. Sometimes we use prototypes to generate the problem statements to solve for, and that becomes an interesting starting point. We call it speculative design because we try to venture out with different starting points. If you can articulate them into visible, tangible concepts, then we find we get much richer discussions, and that helps us.

Fergal: Can I ask what would be an acceptable hit rate there? What percentage of the things you bring to that “make to learn” or prototype phase eventually make it through to production? Do you feel bad if you discover 90% of the things aren’t suitable? What sort of failure… maybe failure’s even the wrong way of putting it. What sort of learning rate would be acceptable?

Arin: Good question. I would say that, because we have this design thinking framework, one part of it emphasizes the fact that everything is a prototype, and if everything is a prototype, we expect it to evolve over time. Hence, we try not to get overly emotionally attached to a specific idea or thought. In that sense, I would say that maybe 20% of our initial ideas get into product, and that’s a pretty good hit rate.

Principles of AI

Fergal: That sounds great to me, and particularly at the scale you operate. Wow, that sounds amazing. Are there any other sort of design challenges unique to AI products that you encounter? I remember you mentioned alignment and the user as the north star and so on. Any other design challenges that come to mind?

Arin: Designers really have to think in terms of building trust, and that’s not something we are trained to do. We do have empathy, we tend to understand the users as much as we can, but to build trust, we really need to start looking into things like the voice and tone, the timing of the interaction, how believable it is. If it’s a conversational AI, do we remember past conversations? Knowing what’s important to each other, giving feedback and giving the avenue for feedback are all important things.

“Let’s not build AI for AI’s sake”

In terms of challenges, first, we’ve got to figure out if the cost of building the AI use case is actually worth the benefit it’s going to give to the user. Let’s not build AI for AI’s sake. Then there’s deciding the right way for it to be transparent: where is the AI used, what is the value to the user? These are things that, as designers, we need to watch out for and make part of our design process.

Finally, explainability. Users need to decide if the insight that’s provided by the AI is accurate. How do we make that clear and transparent to users? There are a lot of new things we have to learn and evolve as designers and not take for granted, especially when it comes to AI.

Fergal: There are three things I’d love to just understand a little bit deeper there. When you mentioned trust, do you mean trust in the competence of the system, or do you mean more like ethical trust, trust that your data is going to be used for good and in your interests, or do you mean both?

Arin: I mean both. I’ve got a 10-year-old. He is very curious about technology and is learning about AI, but he has this inherent fear that AI is going to take over the world. There’s that part of it. The second part is, what is the value that AI is providing? Do I actually trust what it’s telling me? It’s a little bit of both, but I feel like we are early on this road. Trust is earned. You can’t just have trust from the get-go. As designers, our challenge is to build it into our product.

Fergal: I 100% agree with that. I’m delighted to hear you say that. That totally resonates with me as well. You mentioned explainability there. When I was looking through IBM’s design for AI, you’ve got this beautiful website, and explainability is actually a fairly top-level heading there. Do you think explainability is necessary for ethical AI?

Because you can have a big, incomprehensible neural network that perhaps makes state-of-the-art predictions, but it’s very hard to explain exactly why it does this. Does a focus on explainability mean that we shouldn’t use those black-box, un-interpretable methods, or does it mean something else? Is there a way of using those, but binding that with another part of the system that increases trust? I’d love to hear you talk about that.

Arin: I think that explainability and ethics are mutually related. You can’t have one without the other. When we think about ethics, it’s more about what you use AI to do and how you do it, so how do you maintain fairness, how do you secure data, keep users in control of decisions, etc.

“In some ways, explainability is like the nutrition label that you find behind a box of vitamins”

Explainability is really the adoption piece of it. It’s opening the black box. It’s showing the quality, so the source of the data, how recently the data was generated, what’s the diversity of data, what’s the volume, and what’s the rationale of the model. In some ways, explainability is like the nutrition label that you find behind a box of vitamins.

Fergal: That’s a great analogy. Yes, all right. Maybe, if I can take that analogy a little bit further: if someone says, “Hey, here’s this wonderful dish, but I can’t tell you what’s in it,” I’m going to be suspicious. I’m going to say, “Well, actually, no. Maybe I’d want this one, but I don’t actually know what’s in it. I’m not going to eat something, however delicious, if you won’t tell me what’s in it.” That’s great. I’m always searching for analogies because I think so much of the work in this field of AI is about explaining even the complicated technology we use.

Arin: In terms of some user research we did on what users think about AI systems and what the key adoption criteria are, trust becomes one of the important parts, and trust issues can be mitigated with explainability and such.

Around 70% to 80% of the customers we work with said that being able to trust that their AI’s output is fair, safe, and reliable is hugely important. Almost 80% to 90% of them said that their organization had been negatively impacted by problems like bias in data or AI models. When we have that inherent challenge in trust, explainability becomes even more critical.

Correcting for bias

Fergal: Does that guide you towards a certain technology direction then as well? Does that even increase the bar at which point you’d consider deploying an AI system?

Arin: That is precisely the point. We want AI models to be performant, to be accurate, and to be fair, all three things, from a business perspective. From a user perspective, they would like AI systems to be fair, to be accurate, to give them the insight that they need, and help them get their job done.

“In creating the AI models, you need to train them up with good datasets. It’s garbage in, garbage out”

There’s trust on both sides, but in creating the AI models, you need to train them up with good datasets. It’s garbage in, garbage out. If the dataset is not right, your eventual model performance is going to be pretty bad. But then, we also have to build in some diversity of thoughts and ideas on how the AI models are built.

For example, let’s say we are building an AI model for a mortgage app. You fill in a form and you apply for a mortgage loan or fund. Imagine if you had an AI model for it and it was built on datasets that had more males than females, for example. Will it bias the model accuracy towards male applicants? You don’t want it to, so you have to mitigate that. Those are little things we need to pay attention to so that we can trust AI systems more.
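Arin’s mortgage example boils down to two checks a team can run before training: is one group under-represented in the data, and do outcomes differ sharply across groups? Here is a minimal, hypothetical sketch of those checks in Python with pandas; the toy data, column names, and thresholds are assumptions for illustration, not IBM’s tooling or method.

```python
# Illustrative only: a minimal check for the kind of dataset imbalance Arin
# describes, using a hypothetical mortgage-applications table.
import pandas as pd

applications = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "M", "F", "M", "M"],
    "approved": [1,    1,   0,   0,   1,   1,   0,   1],
})

# 1. Representation: is one group heavily under-sampled in the training data?
shares = applications["gender"].value_counts(normalize=True)
print("Share of applicants by gender:\n", shares)

# 2. Outcome parity: do approval rates differ sharply between groups?
approval_rates = applications.groupby("gender")["approved"].mean()
parity_gap = approval_rates.max() - approval_rates.min()
print("Approval rate by gender:\n", approval_rates)
print(f"Demographic parity gap: {parity_gap:.2f}")

# A large gap flags the data (or the labels) for review before training.
if parity_gap > 0.2:  # threshold chosen purely for illustration
    print("Warning: potential bias - investigate sampling and labeling.")
```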

Fergal: Got it. AI systems are unique, or at least very unusual, in that they change and they learn even after we build the system. The data coming in that maybe we don’t see until production time can actually change its behavior. How do you manage that risk? You’ve built an AI system, you were happy with its performance when it was built, but now it’s in the wild, it’s seen more data, and maybe it starts doing something you don’t want or making trade-offs that you wouldn’t approve of or wouldn’t be happy with. How do you correct that? Any thoughts on how design can help with that?

“AI is not a one-and-done thing. It has a lifecycle, and the lifecycle doesn’t end with deploying it into production”

Arin: Yes. If I were to characterize AI, AI is not a one-and-done thing. It has a lifecycle, and the lifecycle doesn’t end with deploying it into production. The lifecycle extends to, “Okay, now it’s in production, we have to learn how the AI system is working, how performant it is, how accurate it is, get the user feedback.” The feedback loop becomes important, not just for research, but also for instrumentation-level data we can collect on AI systems. If we connect them together in a loop, that’s what helps us make it better, et cetera.

There are tools and technologies out there to do model performance and bias detection and drift detection and things like that post-production. IBM has one called IBM Watson OpenScale. We use the combination of tooling, instrumentation, user research, funnel the data back, and in some cases, re-train the dataset and re-deploy.
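To make that post-production loop concrete, here is a minimal, hedged sketch of one standard drift check: comparing a feature’s training-time distribution with what the deployed model is actually seeing. It is a generic illustration in Python, not the Watson OpenScale API; the feature, distributions, and thresholds are assumed for the example.

```python
# Illustrative only: detect post-deployment data drift by comparing the
# distribution of a feature at training time vs. in production, using a
# two-sample Kolmogorov-Smirnov test. Generic sketch, not IBM tooling.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_income = rng.normal(loc=60_000, scale=15_000, size=5_000)    # seen at training time
production_income = rng.normal(loc=72_000, scale=18_000, size=1_000)  # observed after deployment

statistic, p_value = ks_2samp(training_income, production_income)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")

# A significant shift suggests the model is scoring data it wasn't trained on,
# a signal to re-examine, re-train, and re-deploy (the loop Arin describes),
# alongside ongoing accuracy and fairness monitoring.
if p_value < 0.01:  # threshold chosen purely for illustration
    print("Drift detected: schedule a review / re-training of the model.")
```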

Building sustainable AI

Fergal: Is there a maintenance cost associated with that? Would you then shy away or be reluctant to apply AI to a solution where you weren’t willing to pay that ongoing cost of continuous improvement?

Arin: Yes, indeed. In fact, when you look at our customers or the organizations that we work with in the adoption of AI, everyone wants to harness this machine learning and AI, but they haven’t really thought through the cost element of it, the lifecycle of it. There’s no magical way to add AI to a product and make it work. You have to make it scale and be iterative and run experiments.

Building sustainable AI takes organizational change. We need to provide time for teams to learn and to integrate it into the continuous delivery process. It’s a never-ending lifecycle of maintenance. You have to maintain your models and the data ops that come with them.

“Start with something that could be done in six to 12 months so you can vet out the operational and team costs you need to scale the AI”

If you’re getting into the field of AI and you want to infuse AI into your products, start with something small. Start with something that could be done in six to 12 months so you can vet out the operational and team costs you need to scale the AI delivery and maintenance and the time needed to collect and re-prep data and so on.

Fergal: That touches on another debate that I hear sometimes, which is, we’re seeing increasing maturity. I agree with you; I think earlier you said that it is still very early days for AI in terms of adoption. That totally makes sense to me.

We are seeing a gradually increasing maturity. One question that often comes up is whether, in 10 years, this area of AI for products is going to be solved to the point where you just go and download a library from somewhere, a little bit like databases today: databases are pretty good now, and you don’t need to write your own each time.

“There needs to be intentional decision-making to contribute towards clear guidelines and policies and principles of ethical AI”

I’d love to understand how you think about the maturity of this. Do these ethical issues or trust issues specific to AI mean it’s going to take a very long time before you could just buy it off the shelf without going on that journey?

Arin: Great question. I think that AI technology right now is a little bit ahead of adoption. That’s one. 10 years down the line, I feel like AI is going to be a fabric of everything that we use. It will be invisible and implied, with a lot more trust built in.

For it to be like the example of databases, it’ll take a little bit of time. There needs to be intentional decision-making to contribute towards clear guidelines and policies and principles of ethical AI. Big companies like IBM are working together on generating these AI values and principles. I feel like once the principles are known and everyone plays within the same rules, it will become inherently more available.

Transparent AI

Fergal: Got it. You mentioned the principles there. I believe one of your principles I read, and I may be misquoting here, so correct me if I am, but one principle is to be very clear when the user is dealing with an AI and not a human being, just AI transparency, and don’t be tempted to build an AI that pretends to be a human.

Where did that come from, and what else should we keep in mind, particularly when designing an AI that customers actually interact with to get something done in an explicit way, like conversational AI?

“There are trust issues on what AI does or doesn’t do, so it’s important to be very transparent on what the user is interacting with”

Arin: Because there are trust issues on what AI does or doesn’t do, it’s important to be very transparent on the level of engagement and what the user is interacting with. If it’s a bot, you need to not hide it, not make it seem like a human being, because there is a relationship being built here. If you break the trust early on, you’ll never get a user back. Transparency then, in terms of principles, is to always be clear about how and where the AI is being used.

It’s also about privacy, in some ways. It’s about safeguarding customer and consumer privacy and rights. It’s ensuring the security of models and data and all of that. At the end of the day, we need to be transparent with our users that the AI is good, it’s being driven from datasets that are trustable, and it’s giving them insights and data points that could actually make their job better. Explainability and transparency are interlinked.

Fergal: 100%. Before we finish up, one question that I like to ask people on these podcasts is whether there’s someone in the industry that you look up to or are inspired by, or whose work you love and would recommend listeners who are interested in learning more about this to go check out?

“I’m a big fan of Jony Ive. Not just for the products, but for designing for the details, right?”

Arin: My answer might be a little cliché, but I truly believe in it. From a craft standpoint and quality of design, I’m a big fan of Jony Ive.

Fergal: Jony Ive from Apple.

Arin: Yes. Not just for the products, but for designing for the details, right? When it comes to design as a practice, user experience, usability, and the ethos of building products with the right heuristics, I look up to Don Norman. He is probably one of the reasons why I’m in this field. When I read his book, The Design of Everyday Things, it opened up my mind a lot. Those are the two people I can think of.

Fergal: Fantastic. Two designers, and sometimes I’m out of my depth on a design discussion, but two designers that I’m familiar with. Lastly, before we finish up, where can our listeners go to keep up with you and your work?

Arin: Twitter, LinkedIn, and I have a Medium publication where I tend to share stories as well, but I’d also say that it’s not just me, it’s based on the goodness and the great work that IBM does. If you’re interested in knowing a little bit more about some of the things I talked about, just go to IBM.com/design/thinking. It has a lot more information about how you can start thinking about designing for AI. There’s an AI essentials toolkit. You can try that as well.

Fergal: Fantastic. Thank you so much. I’ve definitely been checking out those resources at IBM.com in preparation for this, and I thought they were great. Thank you very much for being here with us today, and I can’t wait to see the direction that you and IBM go in this area in the future.

Arin: Thank you so much.
