Is there some truth to agentic AI (AAI)? Can it do everything that is claimed? Can it decide what to do without human input?

AAI is indeed a curiosity. Personal viewpoints aside, AAI is designed to operate autonomously, much like children when no one is watching them. Your only hope is that the training it has been given will guide it in choosing the appropriate course of action.
What about the potential benefits? In the IT world, systems seem to have trouble only between midnight and 6:00 a.m. For anyone awakened in those hours, the thought of having most/some/all issues detected and resolved automatically would be extremely desirable. Is this the future? Claims about AAI point to technology that could unlock levels of efficiency and productivity previously unimaginable.
Beyond IT, what can business teams expect from AAI? They might generate standard reports via robotic process automation (RPA), but circumstances change, possibly daily, so could those reports change based on how AAI interprets current events? Perhaps.
Current “core characteristics” of AAI include autonomy and proactive decision-making. Add to that self-learning, memory management (for past interactions), adaptability, etc. There may be concern about how long the agentic technology should remember a decision and the criteria for reaching it. Should an AAI have a fallback solution for any given scenario? The fallback for computers is “If all else fails, reboot,” which raises the question: Do we need a fallback approach for AAI?
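The fallback idea can be sketched as a simple wrapper: try the agent's chosen action, retry once, and fall back to a known-safe default if it keeps failing. This is only an illustrative pattern, not any vendor's implementation; all function names here are hypothetical.

```python
# Illustrative sketch: wrap an agent's action with a known-safe fallback.
# All names (act_with_fallback, risky_restart, etc.) are hypothetical.
from typing import Callable

def act_with_fallback(action: Callable[[], str],
                      fallback: Callable[[], str],
                      retries: int = 1) -> str:
    """Try the agent's chosen action; on repeated failure, take the safe default."""
    for _ in range(retries + 1):
        try:
            return action()
        except Exception:
            continue  # retry before giving up
    return fallback()  # the agentic equivalent of "if all else fails, reboot"

def risky_restart() -> str:
    raise RuntimeError("service did not come back")

def page_on_call_engineer() -> str:
    return "escalated to a human"

print(act_with_fallback(risky_restart, page_on_call_engineer))
```

The design point is that the fallback is chosen ahead of time by humans, so even a misbehaving agent degrades to a predictable action.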
The industry is recognizing that not all AAIs are the same and has devised five categories:
- Reactive/Reflex Agents: Considered the simplest type, as these agents make decisions based on specific inputs coupled with their “world” model. They do not learn from past interactions.
- Utility-based Agents: Rank potential outcomes based on how well they would meet goals.
- Goal-based Agents: Considered more proactive because they use the environment to develop their responses and typically can adjust their outputs as needed.
- Model/Learning Agents: Using various techniques, these agents can learn over time. In short, they can derive solutions when faced with an incomplete data set (they fill in the blanks).
- Multi/Hierarchical Agents: These operate in complex environments where higher-level agents orchestrate lower-level agents, each managing a smaller sub-task while the higher-level agent focuses on the overarching goal(s).
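The contrast between the simplest and the learning categories above can be sketched in a few lines: a reflex agent maps inputs to actions through fixed rules, while a learning agent updates those rules from feedback on past interactions. Class names and rules here are illustrative assumptions, not any real framework's API.

```python
# Illustrative sketch of two agent categories; all names are hypothetical.
class ReflexAgent:
    """Reactive/reflex: maps the current input to an action via fixed rules."""
    RULES = {"disk_full": "purge_logs", "service_down": "restart_service"}

    def act(self, observation: str) -> str:
        # No memory of past interactions, no learning.
        return self.RULES.get(observation, "no_op")

class LearningAgent(ReflexAgent):
    """Model/learning: updates its rules from feedback over time."""
    def __init__(self):
        self.RULES = dict(ReflexAgent.RULES)  # start from the fixed rules

    def learn(self, observation: str, better_action: str) -> None:
        self.RULES[observation] = better_action  # "fill in the blanks" over time

reflex = ReflexAgent()
learner = LearningAgent()
learner.learn("high_latency", "scale_out")
print(reflex.act("high_latency"))   # unseen input, fixed rules -> no_op
print(learner.act("high_latency"))  # learned from feedback -> scale_out
```

The same skeleton extends to the other categories: a goal-based agent would pick actions by searching toward a goal state, and a hierarchical agent would delegate sub-tasks to instances like these.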
AAIs may eliminate the need to interact with certain systems. Why bother when your agent can do it for you, right? Large language models (LLMs) may be of concern too, but they lack the ability to act autonomously. And agentic AI shouldn’t be considered an enhanced version of RPA, since RPA focuses on repetitive tasks and is very rules-based — while AAIs are designed to be adaptive, can work in complex environments with complex goals, and are considered autonomous.
Agentic AI is not marketing hype. It is here, and it will mature.
Dan Kempton is the Sr. IT Advisor at the North Carolina Department of Information Technology. An accomplished IT executive with over 35 years of experience, Dan has worked nearly equally in the private sector, including startups and mid-to-large scale companies, and the public sector. His Bachelor’s and Master’s degrees in Computer Science fuel his curiosity about adopting and incorporating technology to reach business goals. His experience spans various technical areas, including system architecture and applications. He has served on multiple technology advisory boards and ANSI committees, and he is currently an Adjunct Professor in the Industrial & Systems Engineering school at NC State University. He reports directly to the CIO for North Carolina, providing technical insight and guidance on how emerging technologies could address the state’s challenges.