In The Language of Deception: Weaponizing Next Generation AI, artificial intelligence and cybersecurity veteran Justin Hutchens delivers an incisive look at how contemporary and future AI can and will be weaponized for malicious and adversarial purposes. In the book, you will explore several foundational topics, including the history of social engineering and social robotics, the psychology of deception, questions of machine sentience and consciousness, and the history of how technology has been weaponized. From these foundations, the author examines the emerging risks of advanced AI technologies, including:
- The use of Large Language Models (LLMs) for social manipulation, disinformation, psychological operations, deception, and fraud
- The implementation of LLMs to construct fully autonomous social engineering systems for targeted attacks or for mass manipulation at scale
- The use of LLMs and the underlying transformer architecture in technical weapons systems, including advanced next-generation malware, physical robotics, and even autonomous munition systems
- Speculative future risks such as the alignment problem, disembodiment attacks, and flash wars
Perfect for tech enthusiasts, cybersecurity specialists, and AI and machine learning professionals, The Language of Deception is an insightful and timely take on an increasingly essential subject.
Table of Contents
Introduction xi
1 Artificial Social Intelligence 1
2 Social Engineering and Psychological Exploitation 19
3 A History of Technology and Social Engineering 53
4 A History of Language Modeling 83
5 Consciousness, Sentience, and Understanding 127
6 The Imitation Game 151
7 Weaponizing Social Intelligence 175
8 Weaponizing Technical Intelligence 215
9 Multimodal Manipulation 239
10 The Future 257
11 The Quest for Resolution 283
Appendix A: Bot Automation 295
Appendix B: LLM Pretext Engineering 303
Appendix C: CAPTCHA Bypass 317
Appendix D: Context Manipulation Attacks 321
Appendix E: Attack Optimization with Monte Carlo Simulations 333
Appendix F: Autonomous C2 Operations with LLMs 349
Appendix G: Disembodiment Attacks 353
Bibliography 357
Acknowledgments 373
About the Author 375
Index 377