From the course: Artificial Intelligence and Application Security

Generative AI

- So what is generative AI? A dictionary definition of the term generative says, "Relating to or capable of production or reproduction." When I think about normal, non-generative AI, I'm thinking about asking a question and finding the answer in a very large data set. Generative AI, or gen AI, is distinct from non-generative AI in that it actually creates something new. Generative AI does not simply find an answer in a data set. It learns from that data set and produces new content based on patterns and probability distributions.

Three of the most common formats for generative AI are text, images, and audio. Chatbot assistants, such as GPT-4 and Claude, produce text-based responses to questions or instructions. For example, "Give me an itinerary for a two-day trip to San Francisco." Image generators, like DALL-E and Midjourney, create visual outputs from conceptual descriptions and prompts. I could ask the AI to draw a cat riding a horse in a photorealistic style. AI music tools, such as Suno, Udio, and MusicLM, let users input text instructions and produce songs. For example, "a heist, planned and executed by a group of dogs, spy jazz, bossa nova."

How does generative AI work? One way to think about it is to imagine that you start a sentence and then ask the generative AI system to complete it based on what it has learned from a training data set. For example, if I say to a generative AI system, "I like to eat fried chicken and blank," the system responds with a guess to fill in the blank. Depending on the data set it was trained on, it might say, "I like to eat fried chicken and waffles," or, "I like to eat fried chicken and eggs." An important aspect of generative AI is that it is creative and unpredictable. I like to think of it as though generative AI is using its imagination.
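The fill-in-the-blank idea above can be sketched in a few lines of Python. This is a toy bigram model, vastly simpler than a real large language model, but it shows the same core mechanism: learn word-following patterns from a training data set, then sample the next word from the learned probability distribution. The corpus and function names here are illustrative placeholders, not part of any real AI library.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus -- a stand-in for the large data set a real model learns from.
corpus = (
    "i like to eat fried chicken and waffles . "
    "i like to eat fried chicken and eggs . "
    "i like to eat fried chicken and waffles ."
)

# Count which word follows each word (a bigram model).
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def complete(prompt: str, n_words: int = 1) -> str:
    """Fill in the blank by sampling each next word from learned counts."""
    out = prompt.split()
    for _ in range(n_words):
        counts = follows[out[-1]]
        choices, weights = zip(*counts.items())
        # Weighted random choice: this is where the "creative and
        # unpredictable" behavior comes from.
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(complete("i like to eat fried chicken and"))
# Sometimes "waffles", sometimes "eggs" -- "waffles" more often,
# because it appears more often in the training data.
```

Note that the output is not looked up; it is generated by sampling, so running the same prompt twice can give different answers.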
The problem with imagination is that sometimes it produces results that are completely absurd. When this happens in generative AI, it's called a hallucination. If I say, "I like to eat fried chicken and blank," and the gen AI system responds with, "I like to eat fried chicken and astronauts," that would be considered a hallucination: an absurd response that doesn't make any logical sense. While a hallucinated response from a generative AI system is not an application security problem in the traditional sense, it's important for us to understand that generative AI does not produce fact or truth. Rather, it takes data inputs and experiments with data outputs. The security takeaway when it comes to generative AI is that there must always be a human in the loop when generative AI is used for critical business processes. Organizations are accountable for the outputs of their AI systems, and those outputs should always be checked by an actual person before they are used and acted upon.
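One way to picture the human-in-the-loop requirement is as a gate between the AI system and any critical action. The sketch below is a minimal illustration under stated assumptions: `generate_draft`, `human_approves`, and `run_critical_process` are all hypothetical placeholder names, not a real API, and the approval check stands in for routing the draft to an actual person.

```python
def generate_draft(prompt: str) -> str:
    # Placeholder for a call to a generative AI system.
    return f"AI-generated draft for: {prompt}"

def human_approves(draft: str) -> bool:
    # Placeholder for a real reviewer; a production system would route
    # the draft to a person and block until they approve or reject it.
    # Here we just reject the absurd example from the transcript.
    return "astronauts" not in draft

def run_critical_process(prompt: str) -> str:
    draft = generate_draft(prompt)
    # The gate: no AI output reaches a critical business process
    # without a human sign-off.
    if not human_approves(draft):
        raise ValueError("Draft rejected by human reviewer; do not act on it.")
    return draft  # only approved output is acted upon
```

The design point is that the approval step is in the control path, not a log entry after the fact: if the reviewer rejects the output, the process stops.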
