*December 20, 2025*

Despite widespread industry promotion of "AI agents" as transformative technology, there is no consistent definition of what an AI agent actually is. This lack of clarity is causing confusion in the market and potentially frustrating customers. As marketing initiatives capture and reshape the term, the situation will only deteriorate. Still, it is useful to understand where this confusion comes from. In my opinion, it stems from the different perspectives used when defining what an agent is, and from a lack of prior reflection on which perspective is most appropriate.

Some take a physical stance (what are the components?) and define agents as "LLMs equipped with instructions and tools." Others take a design stance (what are the intended functions?), saying that an agent is "a system that can operate independently over extended periods" or "systems that can be tailored to have particular expertise."

Perhaps more problematic are cases where these systems are described from an intentional stance, ascribing beliefs, desires, and rationality to them. OpenAI published a blog post defining agents as "automated systems that can *independently accomplish tasks* on behalf of users." According to Salesforce, agents are "a type of system that can *understand* and *respond* to customer inquiries without human intervention." This perspective is problematic because it borrows intentionally loaded terms without explaining what they mean. Over time, such terms lose their meaning when used without real comprehension.

What perspective should we use when discussing our work? While it is true that these systems have become so complex that even their creators cannot fully grasp them purely from a physical or design stance, adopting an intentional stance risks attributing capabilities to these systems that they don't genuinely possess. The physical stance is too reductive, failing to capture the emergent behaviors of these complex systems.
The intentional stance, meanwhile, can lead to inflated expectations and eventual disappointment. A balanced approach might be a modified design stance that acknowledges both the intended functions of these systems and the limitations of our current understanding. I'd suggest we stick to the design perspective (with the potential exception of marketing).

Given these challenges in terminology, our company has chosen to adopt the term "Automatons" when referring to our systems. This deliberate choice provides clarity and precision in our communications, both internally and with customers. When we discuss "Automatons," everyone involved understands exactly what we mean, avoiding the ambiguity that plagues industry discussions about "agents." This choice grounds our discussions in concrete reality rather than in abstract concepts with shifting definitions. By consistently using "Automatons," we create a shared vocabulary that enables more productive conversations about capabilities, limitations, and applications, while sidestepping the philosophical traps that come with more intentionally loaded terminology.