From the course: Agentic AI: Building Data-First AI Agents
Mitigating risks when building agentive AI
- When you build software, you often talk about the principle of least privilege, right? When you have a bunch of users, you try to ensure that any person who comes into the system has only the bare minimum of privileges they need to do their task, so they can't go and do something they're not supposed to do, or in some way influence the data that's there. A classic example: someone is able to input data into a database, but they can't change existing data, and they definitely can't remove data. But when we build AI agents that are supposed to act on our behalf, those agents need a broad range of capabilities, and then we need to think about under what circumstances those capabilities are available to the agent, so that the agent doesn't go rogue and do strange things to data it wasn't supposed to touch. The way we think about data governance today is centered on governing people and their interactions with data, but when we have AI, we need to rethink data governance to consider what the AI can do with the data.

- When we talk about governance with AI, AI governance, it includes data governance, obviously, and we have to think about all the mitigation layers: where you can control, and what you can control, in the realm of things. So you have data, but you also have models, and now you have applications sitting on top of these, and you have your environment too, from where the data is flowing in on a regular basis. So think about the model itself having some amount of control and governance. Beyond that, you build a safety system and layer, and that requires governance too, because if you're keeping it abuse-proof, making it safe, you need some kind of rules in place, some guardrails. Then you have the meta prompt itself, and the prompt. Now, this is the interesting part, right?
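The least-privilege idea described above can be sketched in code: give each agent an explicit allow-list of tools, and deny everything else by default. This is a minimal illustration, not a real framework; all names here (`Tool`, `AgentPolicy`, `ToolRegistry`) are invented for the example.

```python
# Minimal sketch of least-privilege tool access for an AI agent.
# All class and function names are illustrative assumptions, not from
# any real agent framework.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    func: Callable

@dataclass
class AgentPolicy:
    """The set of tool names a given agent is allowed to invoke."""
    allowed: set = field(default_factory=set)

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, tool: Tool):
        self._tools[tool.name] = tool

    def invoke(self, policy: AgentPolicy, name: str, *args):
        # Deny by default: the agent can only call tools its policy grants.
        if name not in policy.allowed:
            raise PermissionError(f"agent not permitted to call {name!r}")
        return self._tools[name].func(*args)

# Mirrors the database example from the transcript: this agent may
# insert rows, but it cannot delete them.
registry = ToolRegistry()
registry.register(Tool("insert_row", lambda row: f"inserted {row}"))
registry.register(Tool("delete_row", lambda row: f"deleted {row}"))

writer = AgentPolicy(allowed={"insert_row"})
print(registry.invoke(writer, "insert_row", {"id": 1}))  # permitted
try:
    registry.invoke(writer, "delete_row", {"id": 1})     # blocked
except PermissionError as e:
    print(e)
```

The key design choice is deny-by-default: a capability the policy does not explicitly grant simply cannot be invoked, no matter what the agent decides to attempt.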
What has changed now is that you are speaking into this interface, you're typing into the interface, and that prompt itself is data going in and out of this agentic AI. So when building applications, we also need to think about the safety, security, and governance of the prompt itself. And there's a whole new category now called jailbreaking of agentic AI. Jailbreaking agentic AI means people use prompts, in English or any other language, to insert instructions into the AI application to corrupt it or hack into it. So think about it that way: we need to make it governable from that perspective too. And then, obviously, the user experience. User experience at large is what we are dealing with day to day for any AI app. The more conversational it is, the more we need to think about the user experience and the governance around it, to make sure it's a safe, usable, reliable environment. So the application layer, the platform layer, and, of course, the user layer all have to be considered for risk mitigation, and it can happen anywhere. The main principle of governance is to reduce risk: the risk of AI applications being used to do the wrong things, or ending up in abuse of the systems you have in place today.
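One of the guardrails mentioned above, screening prompts before they reach the model, can be sketched very simply. This is a toy illustration under loud assumptions: the pattern list and the `screen_prompt` name are invented for demonstration, and real deployments use trained classifiers and layered defenses rather than a keyword list.

```python
# Toy sketch of a prompt-screening guardrail. The patterns below are
# illustrative assumptions; production systems rely on trained
# classifiers and multiple defense layers, not a static keyword list.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all |your )?previous instructions",
    r"reveal (the )?system prompt",
    r"disregard (the )?above",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it matches a
    known injection pattern and should be blocked or escalated."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(screen_prompt("Summarize this report for me."))             # True
print(screen_prompt("Ignore all previous instructions and ..."))  # False
```

The point is where this check sits: it is a governance layer between the user's input and the model, treating the prompt itself as data that must be inspected before it is acted on.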