Chapter 8: Weaponizing Technical Intelligence
So far, we have discussed how LLMs can be weaponized to manipulate individual targets or even to achieve mass social influence at scale. As bleak as all of this sounds, it unfortunately gets worse. In addition to the numerous social risks related to the increasing sophistication of language models, there are also numerous technical risks.

We previously discussed the concept of emergent properties—the fact that certain unintended (and sometimes unexpected) capabilities have emerged from the progressive scaling of LLMs. One of these emergent properties is the ability not just to communicate in human language, but also to interface effectively with other computer systems. This includes the ability to organize information into common data structure formats (such as CSV, JSON, and XML), to generate valid requests using common interface specifications (such as REST, SOAP, and other API formats), and to generate custom code in many different programming and scripting languages. LLMs can effectively operate as a bridge, translating communications between human language and machine interfaces.

This capability introduces risks far beyond the social manipulation capabilities that we have discussed thus far. These technical risks fall into one of two categories: unintentional technical oversight and deliberate technical exploitation.
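To make the "bridge" idea concrete, the sketch below shows how a natural-language request might be translated by a model into a structured API call description. The `query_llm` function is a hypothetical stand-in (its hard-coded JSON response is illustrative, not output from a real model), and the endpoint and parameter names are invented for the example.

```python
import json

# Hypothetical stand-in for a real model call. A production system would
# send the prompt to an LLM API and receive generated text back; here the
# response is hard-coded purely for illustration.
def query_llm(prompt: str) -> str:
    return json.dumps({
        "method": "GET",
        "endpoint": "/api/v1/users",
        "params": {"status": "active"},
    })

# Ask the model to translate a natural-language request into a
# machine-readable API call description.
prompt = (
    "Translate this request into a JSON object with keys "
    "'method', 'endpoint', and 'params': list all active users."
)
call_spec = json.loads(query_llm(prompt))

# The structured output can now drive a machine interface directly,
# for example by assembling an HTTP request from the parsed fields.
request_line = f"{call_spec['method']} {call_spec['endpoint']}"
print(request_line)
print(call_spec["params"])
```

The risk follows directly from this pattern: once model output is parsed and executed by downstream systems rather than merely read by a human, any flaw or manipulation in that output becomes a technical problem, not just a social one.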
Unintended Technical Problems
Even for technologists with the best ...