Rotary Club of Bombay

Speaker / Gateway


Mr. Sandeep Datta, President, Amazon Web Services (AWS) India and South Asia, and Mr. Pramod Boga, Lead, Generative AI and Machine Learning Business, AWS India and South Asia, on the future of AI.

Mr. Sandeep Datta:
Thank you. I’m sure you would have guessed by now that the real expert is Pramod; I just get to bask in the glory of what he and the rest of our fellow Amazonians do for our customers day in and day out. I’ve been with AWS for three months now, but I spent 21 years at Accenture prior to this, working with clients in India. And if there’s one thing I’ve seen, it is that the pace at which AI, and now GenAI, will end up impacting all of us is unprecedented. As we talk through the next 30 to 40 minutes, there are two or three things I’d like to point out.

If I look around and talk about what our customers are doing — we have customers that include startups, public sector undertakings, academia, and enterprises — our customers are either renovating or innovating. But what is most heartening, and I think as Indians, what we should be very proud of, is the fact that the pace, the depth, and the expanse of innovation happening in India is unprecedented.

When it comes to AI and generative AI, I think India will lead the charge. And why do I say that? If you look at most of the large service organisations — the likes of TCS, Wipro, HCL, Accenture, IBM, Genpact, WNS — most of their nerve centres and core capacity are located in India. So there is no reason why we will not lead the charge. In fact, most of the development and maintenance of what we call use cases in generative AI will happen in India and will be put to use outside India.

Second, as we talk about generative AI, we should also be mindful that a core associated theme is responsible AI. And as things evolve, responsible AI will continue to be core and central to how generative AI disrupts us, our clients, industries, and society in general.

I also talked about the fact that India will have the opportunity to lead the charge. There is also an implication and a responsibility ahead of us, which is the impact on academia. Because very soon, as the impact of GenAI takes shape, it will have a direct bearing on what our students need to be trained on. The industry will expect academia to keep pace with the requirements of generative AI. Therefore, skilling in general becomes a very critical theme as we speak about generative AI.

Third, we should also be mindful of the fact that, contrary to what some of us think or perceive, the government is leading the charge on generative AI. Now, there are core sectors like defence or core banking, which have regulations. But if you look at public sector undertakings, or at the digital stack initiatives, several of which run on AWS (DigiYatra, DigiLocker, CoWIN, Aarogya Setu, and the Government e-Marketplace among them), they are all case-in-point examples where the government is using AI, digital, and generative AI at scale.

Today, I don’t think we fully recognise the importance, but thanks to the strength of our underlying banking infrastructure, our UPI payment settlement mechanism is real-time. It operates at a scale that spans 150 crore Indians and settles instantaneously. Even if you look at PayPal and some of the other case examples in the West, they take 24 to 48 hours to settle.

So right in our backyard, right here at home, we are seeing enough case examples that give us the confidence that India will lead the charge, as I mentioned earlier. This is why, as AWS, very recently, we have reinforced our commitment and invested an additional $8.3 billion into the existing capacity we have in the state of Maharashtra, specifically in Mumbai. This investment has an estimated GDP impact of $15.1 billion and is expected to create close to 80,100 jobs by 2030. Globally, we see the potential and impact, which is why we continue to invest in India.

The last thing before I end is that generative AI will now move beyond creating images and videos to use cases that are highly industry-specific. For example, when I met the chairman of BPCL a few weeks back, his request to us was: “I have a planned refinery shutdown that lasts for a month. I spend ₹6,000 crores on the refinery shutdown. If you can come in using AI, data, machine learning — whatever it takes — and reduce my refinery shutdown by just half a day, for my business, that’s a direct impact of ₹100 crores.”

So increasingly, CEOs will expect organisations like ours and service providers to develop use cases that are highly industry-centric and relevant, creating a direct business impact.

 

Mr. Pramod Boga:

So I basically lead the Generative AI and Machine Learning Business for AWS in India. A little bit about me — I’ve been with Amazon for close to a decade now. I’ve had the privilege of guiding customers through two key technology shifts. One was the adoption of cloud computing about a decade ago, and now, with Generative AI. This gives me a unique vantage point, allowing me to witness this journey firsthand. I look forward to sharing some of my learnings with you all.

Okay. I understand that we have a mixed audience here, with varying levels of knowledge. So, I’ve tried to keep it a bit basic. I won’t go too deep into technology, though I’d like to. Feel free to stop me if I get too technical, but I’ll keep it at a level that’s accessible to everyone. We’ll talk about the evolution of machine learning in general.

How has Generative AI come about? What is all the hype and hoopla about? We’ll then delve briefly into the kind of Generative AI applications built at Amazon.com, some of which you may have already experienced through the app. We’ll discuss the innovation opportunities that exist today across multiple business verticals. Most of you are running businesses or are part of businesses, so you’ll want to understand that. I’ll show a quick demo of how it looks in practice using a real use case, and then I’ll wrap up by discussing some of the developments likely to happen in this space over the next 12 to 18 months.

So, let’s start with the evolution of machine learning. First, let’s demystify a few terms. Many people use these interchangeably, but let me clarify. Artificial Intelligence (AI) refers to any system that performs tasks mimicking human intelligence. Essentially, any AI system looks for a probabilistic outcome, meaning it predicts an outcome with a high degree of certainty — similar to human judgment. That’s what AI is — it mimics human judgment.

Now, almost every AI system relies on machine learning underneath. What is machine learning? Simply put, it analyses a vast amount of data to derive decision logic, which is essentially called a model. For example, if you want to create a model for predicting apartment prices in Mumbai, you’d first supply it with 10 years’ worth of data across different areas. Then, if you ask the model a question — say, “What will be the price of an apartment in this particular locality next year?” — it will provide a prediction based on the patterns it has learned from the data. That’s machine learning.
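To make that concrete, here is a toy sketch of the apartment-price idea: fit a straight line to historical prices and extrapolate one year ahead. All the numbers are made up for illustration; a real model would use many more features than just the year.

```python
# Toy sketch of "learning decision logic from data": fit a line to
# historical (year, price) pairs, then predict next year's price.
# All numbers are invented for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Ten years of hypothetical average prices (in lakhs) for one locality.
years = list(range(2015, 2025))
prices = [180, 190, 205, 210, 225, 230, 250, 265, 280, 300]

a, b = fit_line(years, prices)
predicted_2025 = a * 2025 + b
print(round(predicted_2025))
```

A real system would use richer features (locality, size, amenities) and more robust models, but the principle of deriving decision logic from past data is the same.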

Deep learning is a subset of machine learning. It works on neural networks, which mimic the human brain. Our brain has the ability to recognise patterns, understand concepts, and draw conclusions. Neural networks function similarly. In deep learning, if you train a model to identify a cat in a picture, you feed it multiple images containing cats and other animals in various settings. Then, when you provide a new image, the model can determine whether a cat is present or not. That’s deep learning.

Generative AI is a part of deep learning. It is powered by very large models known as foundational models, or as many of you might be familiar with, large language models (LLMs). Let’s understand what a foundational model is. Foundational models are built on an enormous number of parameters — typically in the millions or even billions. You might have heard of models like LLaMA, which is a 70-billion-parameter model, or HuaWei’s 120-billion-parameter model. But what is a parameter? In machine learning terms, a parameter is any variable that significantly influences the model’s output.

Going back to the Mumbai apartment price prediction example, parameters would include factors such as area, historical pricing trends, and location (e.g., Bandra vs Khar). These factors impact the model’s predictions. Now, imagine a foundational model with billions of such parameters. This scale allows it to learn highly complex concepts.

These foundational models are pre-trained on internet-scale data, meaning an enormous volume of information. I use the term “internet scale” just to give you a sense of how vast this data is. Because of this extensive training, foundational models can apply their learning across multiple domains. This is what makes them so powerful — you can quickly fine-tune them with minimal additional training to fit your specific business needs. I’ll walk you through some examples of how this works in practice.

So now you all might be wondering: OK, fine. AI has been here. Machine learning has been here. Deep learning has been here. And you say generative AI has been here too. How is it that it has only just caught everyone’s imagination? What has led to it? There are three factors. First, cloud computing providers such as ourselves and others in the market have made sure that almost every organisation, irrespective of size and scale, has access to scalable compute capacity, which is very important if you are trying to build models. Second, there has been an explosion of data, accelerated especially in the COVID era as businesses moved online: you simply had to interact with customers online, so every customer became a data point and every interaction became a data point. And third, there has been the development and advancement of machine learning technologies themselves, especially transformer-based neural network architectures. That’s the tech bit I was talking about.

Now, to give you a feel for what generation means, here is a text generation example. This is a sentence I’ve started writing, and I’ll give it to the model to finish. What the model essentially does, based on its training, is this: if I polled all of you on the next word, some might say “man”, some “people”, some “horses”; it depends on the person. The model takes a distribution over those possibilities, sees that “man” has the highest probability, and puts “man” there. It then does the same for the rest of the sentence, word by word, with the distribution coming from all the data it has been trained on. This happens to be a famous quote, so the model recognises it and completes it, as simple as that. Obviously, this is a very simplified explanation of what text generation means; underneath, it is a logical sequence of prediction steps like this.
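The distribution idea above can be sketched in a few lines: count how often each candidate word followed the prompt in some invented training data, turn the counts into probabilities, and pick the most likely word. Real LLMs do this over tens of thousands of tokens with learned, context-dependent probabilities; the counts below are purely illustrative.

```python
import random

# Hypothetical counts of which word followed the prompt in training data.
next_word_counts = {"man": 90, "people": 7, "horses": 3}

total = sum(next_word_counts.values())
distribution = {w: c / total for w, c in next_word_counts.items()}

# Greedy decoding: always take the highest-probability word.
greedy = max(distribution, key=distribution.get)

# Sampling: draw a word in proportion to its probability (adds variety).
sampled = random.choices(list(distribution), weights=list(distribution.values()))[0]

print(greedy)
```

Repeating this step, each time feeding the chosen word back in and predicting the next one, is essentially how a sentence gets completed.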

Now, let’s delve into generative AI at Amazon. Firstly, if you look at Amazon’s businesses over the years, we have actually been using machine learning to drive innovation for the last 20 to 25 years. It’s not something we are new to; it’s something we’ve been doing day in and day out. On Amazon.com, which most of you have used, the recommendations for the products we sell, based on individual buying patterns, are all underpinned by machine learning. We have Prime Air, which delivers packages through drones; that’s also driven by machine learning. Alexa, and the home assistants most of you have, use natural language processing driven by machine learning, or deep learning, so to speak. Even in our fulfilment centres, the robotic picking and routing of packages is underpinned by deep learning.

(Ppt)
So let me quickly show you how that works. This is how it looks in a warehouse, by the way. Those are the Kiva robots, which are specific to our fulfilment centres. They have their own paths; they pick and move packages across the warehouse, and all of this is driven by deep learning: sorting the packages, picking them up, putting them in the aisles, then picking them up again depending on where they are headed, either for order fulfilment or for stocking the warehouse. Everything is done through machine learning.

Sometimes you wonder, when you place an order: how are these guys going to send it to me in the next one hour? That’s primarily because we have predicted and understood what kind of product will be needed in a given area, and the likelihood of that order coming in. Everything is underpinned by machine learning; without it, none of this works at this scale. Those are Kiva robots carrying the aisles from different places into the facility for processing.

Now, in terms of what generative AI has driven at Amazon.com: the first thing, which all of you would have noticed if you were shopping on the Amazon app, is reviews. Earlier, if you had to pick a product and looked at reviews, you’d have to go through them sequentially to see whether it was good or bad. Now you have a summarised review at the top. It tells you the pros and cons in one shot, which helps you decide faster whether you want to buy or not.

That’s a cool application of generative AI, so to speak: it summarises all the reviews and gives them to you in one place. The other thing you would have noticed is that there’s now a Rufus icon on your app, which is your personal virtual shopping assistant. If you walk into a physical store, you’ll have someone assisting you in your buying journey; think of Rufus as that for the online world. Have any of you tried Rufus? You have? OK, awesome.

So, again, another great application of generative AI, where it makes sure you can discover whatever you want without worrying about it. You just come into the app and type, “I’m looking for X”, and that’s it; it takes you on the journey to buy X. Then there’s Amazon Pharmacy, and I understand there are some doctors in the audience here. Every doctor writes prescriptions in their own way. “Take orally”, dosage, frequency: the same instruction can be written with different terms that mean essentially the same thing. What we’ve done with generative AI is contextualise all those words, so the system understands what the doctor has written and ensures that prescriptions are filled with no errors.

Similarly, with Amazon Ads, for our partners who sell products on our platform, we enable them to create ads for their products. There are a lot of models that enable audio, video, and image generation, and we’ve allowed partners to create ads for their products using these models very, very easily on our platform. Again, this just enhances their products’ appeal when you or I go to click and buy that item.

And last but not the least, at AWS, we use it for driving sales effectiveness, which means when we’re going to go and talk to a customer, we actually use an LLM to summarise all the information about that particular customer, latest stuff that they’ve done in the industry, bring in a lot of knowledge from our internal system, and give it as a summary to the seller who can then go and have a very pointed conversation with the customer. So it just kind of shortens the sales time, right?

Let’s talk about some of the innovation opportunities. Broadly, you will see most of the use cases across these three areas. Most of these areas have existed because machine learning was already helping solve these problems; what generative AI has done is enhance our ability to make progress on these use cases, and at the same time it has created some new categories. Which areas has it enhanced? Look at chatbots and personalisation. Chatbots were rule-based earlier: you had to click option one, option two, option three. Now you can have a free-flowing conversation as if there’s a human being on the other side. That’s what generative AI has enabled, because it now generates responses based on what has been asked of that particular app.

Similarly, with personalisation, you can now hyper-personalise using generative AI at any point in time based on a person’s history. For example, if you’ve been shopping on the app and in the past you have picked products of a certain category and liked certain descriptions, it will start generating descriptions of other products accordingly. I might be looking at the same product as someone else, but the description might be different for each of us. That’s hyper-personalisation. And obviously, text generation, conversational search, all of these aspects are there. One of the areas where it has actually opened up new categories is, obviously, video and image generation, but also areas like synthetic data generation. I was just having a conversation with somebody about building a model for the financial world, and I said the biggest impediment there is that you need quality data.

Now, in machine learning, that’s the biggest challenge. Your models are only as good as your data. If your data is garbage, your model is gonna be garbage, as simple as that.

So, what generative AI has enabled is synthetic data generation: you can generate data that mimics real data and actually build out your models with it. That’s another area to look at. Now, let me delve into certain sectors. In financial services, apart from customer service and assisting customers, there are areas where you can genuinely innovate, and I know there’s a lot of curiosity around agentic AI. By the way, agentic AI is primarily about looking at a process end-to-end, meaning a business workflow end-to-end. It isn’t simple RPA (robotic process automation), and it’s not just automation in general; it’s intelligent, AI-enabled orchestration. In financial services, we have a customer called Canara HSBC Life. What they’ve done is enable their underwriters to use LLMs.

Every underwriter today needs a lot of data before making an informed choice on whether to approve or reject a claim, and so on. All that information is brought to them by the LLM in the format they want. Today, the multiple hospitals and other players in that chain all have different document formats, and these companies are only as good as how well they can read those documents. With generative AI, they can now process documents irrespective of format and irrespective of language. It has made the whole process quicker, faster, better. That’s a great example from the financial services space.
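To make the synthetic data generation idea mentioned a moment ago concrete, here is a toy sketch: measure simple statistics of a small “real” dataset and sample new records that mimic them. All numbers are invented, and real synthetic-data tools model far richer structure (correlations, categories, privacy constraints); this only illustrates the principle.

```python
import random
import statistics

random.seed(42)  # deterministic for the sake of the example

# A small "real" dataset of transaction amounts (invented numbers).
real_amounts = [120.0, 95.5, 210.0, 87.25, 150.0, 99.0, 175.5, 132.0]
mu = statistics.mean(real_amounts)
sigma = statistics.stdev(real_amounts)

# Generate 1000 synthetic amounts with the same mean/spread,
# clamped at zero since amounts cannot be negative.
synthetic_amounts = [max(0.0, random.gauss(mu, sigma)) for _ in range(1000)]

# The synthetic sample's statistics should track the real data's.
print(round(statistics.mean(synthetic_amounts), 1))
```

The point is that a model trained on the synthetic records sees data that behaves like the scarce real data, without exposing any actual customer record.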

Media and entertainment, again, is an area that’s evolving very, very quickly. A lot of people assume AI should start developing its own web series or come up with its own videos, so it’s important here to distinguish reality from magic. The fact of the matter is that the applications here involve hyper-personalising the experience for users, and multilingual capabilities such as instant subtitle generation or voice-enabled experiences. On actual imagery and actual video generation, a lot of work still has to be done; I don’t think it’s yet at the stage where it can develop its own content end-to-end.

We have multiple customers in this space. One of the largest fantasy sports platforms in the country uses generative AI heavily with us. Some of what they’re doing is around generating contests: based on where people are playing and how much money they’re putting in, what kind of contests should be created. So again, there are multiple areas where they’re innovating, but this is one of them. Education, I think, is another area that has been impacted disproportionately. Today, the code an entry-level developer writes is something AI can produce: you simply ask the model to write a particular piece of code and it generates it for you. So for developers, the bar has gone up in terms of how and where they need to start adding value.

Beyond that, education has seen applications like automatic assessment of student papers and answer sheets, like an AI teacher, so to speak. It also has the capability to develop course content based on prior course content. Similarly, manufacturing is another area: proactive maintenance, demand forecasting.

Obviously, I showed you the Kiva robots example. Automotive, again, is where they’re using it to come up with new car designs based on the successful ones they’ve seen so far; we can talk about this if anyone’s interested. I spoke about generating code. Customer support is the area that has been, by far, the most affected by generative AI. Today, if you’re talking to an agent, you’ll probably not know whether it’s a human at the other end. It’s that good.

I just wanted to quickly show you this demo of how it looks in operation. This is one of our customers, a financial services customer, that developed this for their sales teams. They were facing two challenges. One, they were spending a lot of time training new sales reps. Two, sales reps were leaving quickly, so there was a lot of churn. So they built a sales-assist bot that can answer all questions about the particular offering being sold. Based on the question, it generates an answer, and it also takes the sales rep’s language proficiency into account: if the person posts a question in Hinglish, which is Hindi plus English, it will answer in that as well. It takes those complexities into account while giving out answers. What this did for them, essentially, is, one, ensure consistent messaging in what the sales rep tells the customer about their offerings, and two, ensure that they don’t have to keep spending time on training.

Because any new sales rep who comes in, they’ll say: hey, here’s the app, use this when you talk to your customers. And I’m going to wrap up with the future of generative AI. Agents came up earlier; they’re basically about orchestrating complex tasks.

Today, the underwriting example I gave is actually powered by an AI agent. Similarly, if you look at any business process across your businesses, if it’s an end-to-end flow, you can probably automate it using an AI agent: you use a combination of your own systems, some automation, and machine learning models in between, and there you have an agentic workflow in place. Multi-modal means taking into account different types of inputs (image, audio, video, text, all the possible kinds) when you’re building models or generating responses. The third one is multiple models: you’ll probably end up using several models in the same workflow, because every model has a certain specialty. Some models do text generation well; some models do image generation well.
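The multiple-models point can be sketched as a toy router: each stand-in function plays the role of a model with one specialty, and the workflow calls the right one per step. The model names and logic here are entirely hypothetical; real agentic frameworks add planning, tool use, and state on top of this routing idea.

```python
# Toy "multiple models in one workflow" router. Each function is a
# stand-in for a specialised model; the names and rules are invented.

def summarise_model(text: str) -> str:
    # Stand-in for a summarisation-specialised model: keep the first sentence.
    return text.split(".")[0] + "."

def classify_model(text: str) -> str:
    # Stand-in for a classification-specialised model: a crude keyword rule.
    return "complaint" if "not working" in text.lower() else "query"

MODELS = {"summarise": summarise_model, "classify": classify_model}

def run_workflow(ticket: str) -> dict:
    """Route one support ticket through two specialised 'models'."""
    return {
        "category": MODELS["classify"](ticket),
        "summary": MODELS["summarise"](ticket),
    }

result = run_workflow("My card reader is not working. It fails on every swipe.")
print(result["category"])
```

In a real workflow, each stand-in would be a call to a different foundation model chosen for its strength, with the orchestrator deciding which step needs which one.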

The last bit is on AI policies. What’s the difference between a human being and a machine learning model? Can you tell me? The difference is that the human being knows when not to answer.

Machine learning models are always programmed to give an answer. What that tells you is that the moment you get into this area, you need certain safeguards, security, and standards: based on your needs, you must be able to control what the machine learning model, or the generative model, the LLM in question, gives out as output. That’s where AI policies and standards come in. On security, we are working with multiple governments across the globe, including the RBI in India. Anyway, I hope generative AI is there somewhere in the future for all of you. Thank you for the opportunity, and I hope you enjoyed the talk.
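The safeguards point can be sketched as a toy guardrail: check the query against a blocklist and the answer against a confidence threshold, and decline instead of guessing. Everything here (the topics, the stand-in model, the threshold) is a hypothetical illustration, not any particular product’s policy engine.

```python
# Toy guardrail sketch: decline to answer instead of always answering.
# The blocklist, stand-in model, and threshold are all invented.

BLOCKED_TOPICS = {"medical diagnosis", "legal advice"}

def fake_model(query: str) -> tuple[str, float]:
    # Stand-in for an LLM call that returns (answer, confidence).
    return ("Here is a draft answer to: " + query, 0.42)

def guarded_answer(query: str, min_confidence: float = 0.7) -> str:
    # Policy check before the model is even called.
    if any(topic in query.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that topic."
    answer, confidence = fake_model(query)
    # Unlike a raw model, the guarded system knows when not to answer.
    if confidence < min_confidence:
        return "I'm not confident enough to answer that."
    return answer

print(guarded_answer("Give me a medical diagnosis for this rash"))
print(guarded_answer("Summarise our refund policy"))
```

Real guardrail systems layer content filters, policy engines, and audit logging around the model call, but the shape is the same: the wrapper, not the model, decides what is allowed out.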

 

 

ROTARIANS ASK

Can AI be used to understand a person’s mind? For example, with someone like Trump, if you programme it with some input, can it tell you what his output will be? That’s one. And second, will organisations also be run on AI, as in, can you outsource running them to AI?
Okay, so on the first question: what I said about personalisation tells you a lot. Today, a lot of us are online; our lives are online. You like something, you share something, you chat about something. If you have all that data, you can reasonably predict what a person is going to do next or where they will be. It’s not entirely legal, but it’s possible. On the second bit, an organisation being run by AI: AI right now is an enabler, and it will continue to be. A lot of people talk about AGI, Artificial General Intelligence, meaning it’s going to replace human beings. That’s not going to happen. Please understand there have been multiple technology advancements over the last 20 to 25 years that threatened the same thing. The point is, it just makes us smarter as humans; it raises the bar from where we start working. Coding, for example: a lot of people now don’t need to write code, because that can be done by an AI agent. You think of the applications on top of it. That’s where human intelligence comes in.

Will you throw some light on plagiarism and how it’s being used by some universities and how effective is it?
On plagiarism: when you look at AI-generated content, you can often find out whether it has been plagiarised, and I’ll tell you how. A lot of these models introduce markers. For example, with imagery, many of the companies that generate images embed markers so that if someone tries to use an image without their knowledge, it gets caught. There’s a footprint; there’s a way to do this. In terms of content plagiarism in universities, a lot of it depends on where the data is coming from. When generative AI came about 18 months ago, a lot of the chatter was: hey, you have used public data, but you haven’t taken my permission, because some of that data is mine, and hence I’ll sue you. Multiple lawsuits were filed. The first country to come out with a standard on this was Japan, which said: we’re going to treat public data as public data; you don’t need to take any permissions. But there are nuances to it. Today, when you’re developing models on top of data, you need to take permissions into account, and some of the models coming out today have done exactly that. At the start it was a debate, but slowly that’s being settled.

Slightly away from this, everything, the heart of it is computing. So what’s this fourth state of matter which Microsoft has just announced?
Okay, so honestly, I think a lot of it is about setting the narrative in the market, about being seen and heard as the most innovative player out there. For example, our CEO, Andy Jassy, has spoken about the primitives, the things that are supposed to be the building blocks, and he said the fourth one, apart from storage, compute, and databases, is AI. So a lot of it is about how you set the narrative in the market, and there are going to be many such announcements. For that matter, Satya recently said on a podcast, I think: don’t take this AGI thing too seriously; it may not happen as fast as we think, there may be a lot of hype at this point, and we need to be realistic about what it’s actually going to deliver for enterprises. So the debate around this will continue. What I think all of us need to focus on is how it affects our lives. Does it make things easier for us? Does it change our everyday experiences? I think it is doing that when you use these apps. For example, all of us use Zomato or Swiggy to order food. If the image of the dish you’re trying to order appears very enticing, you’re more likely to click and place the order. So a lot of these platforms are working on taking the imagery that comes from the restaurant and enhancing it using generative AI to make it more alluring, so that you’re more likely to order the dish.

Can you help us build an AI model or an AI agent which will make it easier for the Rotary Club of Bombay to collect donations?
Yeah, why not? I think we’ll just have to understand what that process entails. I think it is probably largely intangible, and I’m not sure how that works here, but yeah, I would love to understand that. If it can be automated, I’m sure it’ll be automated.

I actually was going to ask something similar to what Arunji is saying—that what kind of AI can a club benefit from? You know, because we are not a for-profit organisation, is there something that comes to your mind and you can suggest to us?
Sure, the first thing that comes to mind is that you probably produce minutes of meetings: “we met on this day and we did X, Y, Z.” That entire transcript can be auto-generated using AI. As we speak, you can have voice-to-text done by an application that creates the full transcript and then uses a large language model to summarise it. The moment it’s summarised, it can be sent out to everyone: okay, this is what we did on that particular date.

As you understand, AI needs a lot of data mining, churning, and a lot of electricity and a lot of power. And exactly when we thought that it is going to be a very huge costly affair, China comes with its DeepSeek. Can you throw some light on it?
See, again, this is a great example of how narratives are set in the market. People started noticing DeepSeek only after Trump was elected to office, when, boom, they released the model. But if you actually look, and this is on YouTube too, there were six months of research behind it. They also burned money; it’s not that they didn’t. They also used a lot of infrastructure. What has been put out is probably a fraction of what they actually spent. Make no mistake, they have done a very impressive job. What they’ve essentially done is reinvent and challenge some techniques of training and tuning, including using something called a mixture-of-experts model, challenging some typical techniques in building these foundation models, and they succeeded. But in terms of actual cost, they incurred quite a bit too. A lot of people don’t realise that DeepSeek had released models well before this; it just went unnoticed. They only gained traction and virality when they came up with this mixture-of-experts model, which matched some of the best models out there. But a lot of the time, it’s still underpinned by the same basics. It’s just about how you set the narrative in the market.