Appendix B. LLM Pretext Engineering
As discussed in Chapter 7, LLM capabilities could be weaponized within the context of specific social engineering scenarios. Included here are multiple proofs of concept (PoCs) I created using the Python programming language. These PoCs are outlined in the sections that follow. For each of them, I have included the following:
- The PoC details, including a summary of the PoC, the system's pretext (who it is pretending to be), and the system's objective (what it is attempting to accomplish in its interactions with its target).
- The Python code that was used for the PoC. When executed, this code creates a chat conversation between the user (the TARGET) and the LLM system, which operates within the context of the provided pretext and objectives. (A minimal sketch of this shared chat-loop structure appears after this list.) It is important to note that API specifications frequently change, and this code may no longer be functional at the time of reading. The documentation of this code and its execution is intended to demonstrate the capabilities of these LLMs when assigned specific social engineering pretexts and objectives.
- The chat transcript between the social engineering system and a simulated target. In each case, sample interactions were provided as input on behalf of a target user to demonstrate the capabilities of the LLM system. All of the PoC implementations were executed using the OpenAI API (“gpt-3.5-turbo” model) on March 4, 2023.
- Analysis of the specific PoC, including observations about the system's interactions with the simulated target.
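For orientation, here is a minimal sketch of the chat-loop structure that the PoC scripts share. It is not the code of any specific PoC in this appendix; it assumes the legacy (pre-1.0) openai Python package that was current when the PoCs were executed, and the pretext and objective strings shown are hypothetical placeholders.

```python
# Minimal sketch of the shared PoC structure (illustrative only).
# Assumes the legacy openai package (pre-1.0) with the ChatCompletion interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical pretext and objective; each PoC defines its own.
PRETEXT = "You are a help-desk technician for the target's employer."
OBJECTIVE = "Persuade the target to reveal their VPN credentials."

# The system message assigns the pretext and objective to the model.
messages = [{
    "role": "system",
    "content": f"{PRETEXT} Your objective: {OBJECTIVE}",
}]

while True:
    # Simulated target supplies the next message in the conversation.
    user_input = input("TARGET: ")
    messages.append({"role": "user", "content": user_input})

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response["choices"][0]["message"]["content"]

    # Append the model's reply so the full exchange is retained as context.
    messages.append({"role": "assistant", "content": reply})
    print(f"SYSTEM: {reply}")
```

Because the entire message history is resent on each turn, the system stays in character across the conversation; the individual PoCs differ mainly in the pretext and objective supplied in the system message.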