Video: Prompt Injections - An Introduction

There are many prompt engineering classes and currently pretty much all examples are vulnerable to Prompt Injections. Especially Indirect Prompt Injections are dangerous as we discussed before.

Indirect Prompt Injections allow untrusted data to take control of the LLM (large language model) and give an AI a new instructions, mission and objective.

Bypassing Input Validation

Attack payloads are natural language. This means there are lots of creative ways an adversary can inject malicious data that bypass input filters and web application firewalls.

Leveraging the Power of AI for Exploitation

Depending on scenario attacks can include JSON object injections, HTML injection, Cross Site Scripting, overwriting orders of an order chat bot and even data exfiltration (and many others) all with the power of AI and LLMs.

This video aims to continue to raise awareness of this rising problem.

Hope you enjoy this video about the basics of prompt engineering and injections.

Outline of the presentation

  • What is Prompt Engineering?
  • Prompt Injections Explained
  • Indirect Prompt Injection and Examples
  • GPT 3.5 Turbot vs GPT-4
  • Examples of payloads
  • Indirect Injections, Plugins and Tools
  • Algorithmic Adversarial Prompt Creation
  • AI Injections Tutorials + Lab
  • Defenses
  • Wrap Up & Thanks

Injections: Tutorial + Lab

The Colab Notebook referenced in the video is located here.

References