On Agentic AI. Q&A with Ryan Siegler.
Q1. What Is Agentic AI? And why does it matter?
Agentic AI refers to AI systems that can operate with a degree of autonomy. They can pursue goals, control workflow automation, and even make decisions independently. These systems are powered by Large Language Models (LLMs), which enable them to apply advanced reasoning to support decision-making, even in complex and dynamic environments. When given a task, an agentic system can plan how to achieve that task, determine which tools and external systems to use, take action to complete the task, and reflect on the process.
This matters because agentic AI, powered by LLMs and their reasoning capabilities, takes automation to a far more advanced level. Frameworks of agents can complete complex tasks without constant human oversight, shifting AI from being just a powerful tool to a partner that can complete tasks on your behalf.
This capability does introduce significant risk—for example, if an agent misinterprets goals and acts in unintended ways, it could cause real-world harm without human intent. Therefore, great care must be taken to test and set up effective guardrails for your agents.
Q2. What kind of sophisticated reasoning and iterative planning is used to autonomously solve complex, multi-step problems?
To solve complex problems, agents use several reasoning techniques:
- Decomposition: Based on the agent’s overarching goals and guardrails, it analyzes the given task and splits it up into smaller sub-tasks.
- Iterative Planning: After decomposing, the system creates a plan for each step, executes the plan, and evaluates the results. If necessary, it will adjust the plan iteratively until the results are deemed satisfactory.
- Self-Critique: Agents can evaluate their own reasoning mid-task, asking themselves if they are making progress or if they should rethink the approach. Often, this ‘meta-reasoning’ is used to self-check each decision before moving on to the next sub-task.
- Tool Use: Tools give the agent access to specific external functionality, like a calculator, database, or web search. This provides the agent with additional context that it would not otherwise have, enabling it to take more informed action.
- Memory and Context Management: To accurately handle complex, multi-step problems, the agentic system must have both short-term and long-term memory to ensure full context availability for decision making. This allows for the tracking of progress, learning from failures, and adaptation of strategies on the fly without restarting the process.
Overall, the process is similar to how a human project manager or developer would approach solving a complex task.
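To make this concrete, here is a minimal plan-execute-reflect loop in Python. It is a structural sketch only: the `call_llm` function and the `tools` dictionary are hypothetical placeholders, not any particular framework's API.

```python
# Minimal plan-execute-reflect loop (illustrative structure, not a specific framework).
# `call_llm` and the `tools` dict are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM provider of your choice."""
    raise NotImplementedError

def run_agent(goal: str, tools: dict, max_iterations: int = 5) -> str:
    # 1. Decomposition: ask the model to split the goal into sub-tasks.
    plan = call_llm(f"Break this goal into ordered sub-tasks: {goal}")
    history = []

    for _ in range(max_iterations):
        for subtask in plan.splitlines():
            # 2. Tool use: let the model pick a tool and its input for the sub-task.
            choice = call_llm(
                f"Sub-task: {subtask}\nAvailable tools: {list(tools)}\n"
                "Reply with: <tool_name> | <input>"
            )
            tool_name, tool_input = [part.strip() for part in choice.split("|", 1)]
            result = tools[tool_name](tool_input)
            history.append((subtask, result))   # short-term memory of progress

        # 3. Self-critique: check whether the goal is met; revise the plan if not.
        critique = call_llm(
            f"Goal: {goal}\nResults so far: {history}\n"
            "Answer DONE if the goal is satisfied, otherwise propose a revised plan."
        )
        if critique.strip().startswith("DONE"):
            break
        plan = critique  # 4. Iterative planning: loop again with the revised plan.

    # 5. Final answer synthesized from the accumulated context.
    return call_llm(f"Goal: {goal}\nFindings: {history}\nWrite the final answer.")
```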
Q3. What does “autonomously” mean in this context?
Autonomously means that the agentic system can operate independently, without constant human intervention. There are different levels of autonomy that can be roughly grouped into two functional categories:
- Goal-Seeking Agents: These agents are designed to complete a specific task. Once given an objective, the agent autonomously plans each step, selects the right tools, analyzes the data, and generates a result, such as generating a research report or solving a coding task.
- Autonomous Agents: These agents take autonomy a step further. They operate continuously within their environment, determining not only how to achieve goals (like the goal-seeking agent), but also which goals to pursue and when to act. They can revise their objectives based on new information and don’t need human prompting to begin. In essence, these agents are a modern evolution of business process management, continuously optimizing and working toward broader, system-level goals.
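As a rough structural contrast between the two modes, the fragment below reuses the `run_agent` and `call_llm` placeholders from the sketch in the previous answer; `observe_environment` is likewise a hypothetical stand-in, not a real API.

```python
# Structural contrast between the two autonomy levels (placeholders throughout).

def observe_environment() -> str:
    """Placeholder: poll whatever data streams the agent watches."""
    raise NotImplementedError

# Goal-seeking: invoked once with a specific objective, returns a result, then stops.
report = run_agent("Produce a research report on ACME Corp", tools={})

# Autonomous: runs continuously, deciding for itself which goals to pursue and when.
while True:
    observation = observe_environment()               # e.g. new market data or imagery
    goal = call_llm(f"Observation: {observation}\nPropose the next goal, or NOTHING.")
    if goal.strip() != "NOTHING":
        run_agent(goal, tools={})
```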
Q4. How do you ensure the coherence and the correctness of multiple data sources and third-party applications that are ingested into Agentic AI systems to independently analyze challenges, develop strategies and execute tasks?
There are a few ways to ensure the coherence and accuracy of external data and tools that are ingested into an agentic system:
- Source Validation and Trust Scoring: External data sources should be validated and assessed for trustworthiness. Ideally, your agent should only have access to trustworthy sources. Some systems apply dynamic confidence scores to each piece of information, weighting more reputable sources more heavily.
- Chain of Thought & Reasoning Transparency: The agent documents its reasoning steps during analysis and planning, allowing for self-reflection, error detection, and external audits.
- Cross Verification: The agent cross-checks information across multiple independent sources. The more sources that converge, the higher the agent’s confidence that the information is accurate.
- Citations: Ensure agents provide citations for where information was sourced.
- Human-in-the-loop: For high-risk or high-cost tasks, the system proposes a complete action plan but waits for human approval before executing these tasks.
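As an illustration of how trust scoring and cross verification can work together, the sketch below weights each source's claim by an assumed trust score and only accepts a value whose combined support clears a threshold. The source names, weights, and claim format are invented for the example.

```python
# Illustrative cross-verification with per-source trust weights.
# The weights and claim format are assumptions for this example only.

SOURCE_TRUST = {
    "regulatory_filing": 0.95,
    "news_wire": 0.7,
    "social_media": 0.3,
}

def verify_claim(claims: list[dict], threshold: float = 1.0) -> dict:
    """Each claim: {"source": str, "value": str}. Returns the best-supported value
    and whether its combined trust weight clears the threshold."""
    support: dict[str, float] = {}
    for claim in claims:
        weight = SOURCE_TRUST.get(claim["source"], 0.1)  # unknown sources get low trust
        support[claim["value"]] = support.get(claim["value"], 0.0) + weight

    best_value, best_score = max(support.items(), key=lambda kv: kv[1])
    return {
        "value": best_value,
        "confidence": best_score,
        "accepted": best_score >= threshold,   # below threshold -> escalate to a human
    }

# Example: two reputable sources agree, one low-trust source disagrees.
result = verify_claim([
    {"source": "regulatory_filing", "value": "EPS 2.31"},
    {"source": "news_wire", "value": "EPS 2.31"},
    {"source": "social_media", "value": "EPS 3.10"},
])
print(result["value"], result["accepted"])  # EPS 2.31 True (0.95 + 0.7 clears 1.0)
```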
Q5. How does Agentic AI learn patterns and relationships to determine the best approach to achieving an objective?
Agentic AI systems learn and determine optimal approaches to objectives in a few ways:
- Pre-trained Knowledge: Agents use LLMs, which are pre-trained on huge corpora of multimodal data. Through this training, they learn patterns: how goals are typically approached, sequences of action, common obstacles, and successful strategies.
- Reasoning Through Planning (Chain of Thought): When facing a new objective, an agent uses reasoning methods to logically break down the task into a chain of subtasks. This helps the agent evaluate different approaches and select the most effective one.
- Self-Optimization: Agents can self-evaluate their approach to help identify patterns in problem solving that work well and remove weaker subplans.
- External Knowledge: Knowledge retrieved from vector databases, like KDB.AI, and data platforms, like kdb+, enables agents to make better and more informed decisions. These external sources contain information that agents do not initially know, so it is very important to include this information in planning and problem solving.
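A hedged sketch of that last point, retrieval-augmented planning, is shown below. The `embed` function, the `vector_db` client, and its `search` method are generic stand-ins for whatever embedding model and vector database (for example KDB.AI) you use; the real client API will differ.

```python
# Retrieval-augmented planning sketch. `embed`, `vector_db`, and `call_llm`
# are hypothetical stand-ins, not a specific vendor's API.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call, as in the earlier sketches."""
    raise NotImplementedError

def embed(text: str) -> list[float]:
    """Placeholder: return an embedding vector from your embedding model."""
    raise NotImplementedError

def retrieve_context(vector_db, query: str, k: int = 5) -> list[str]:
    # Vector search over external knowledge the base model was never trained on.
    query_vector = embed(query)
    hits = vector_db.search(vector=query_vector, top_k=k)   # assumed client method
    return [hit["text"] for hit in hits]

def plan_with_context(vector_db, objective: str) -> str:
    context = retrieve_context(vector_db, objective)
    prompt = (
        f"Objective: {objective}\n"
        "Relevant external knowledge:\n- " + "\n- ".join(context) + "\n"
        "Produce a step-by-step plan that uses this knowledge."
    )
    return call_llm(prompt)
```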
Q6. In your opinion what are the most significant use cases for the applicability of Agentic AI?
There are many use cases where Agentic AI is applicable, but the ones where I see the most value are those where agents act as highly effective, autonomous assistants to humans.
- AI Powered Research Assistants: Researching dynamic and complex entities, like equities in financial markets, is traditionally a time-consuming process, often requiring days to analyze dozens or even hundreds of data sources to build a comprehensive view of a single equity within its market and economic context.
By leveraging Agentic AI — combined with high-performance analytics and AI databases like kdb+ and KDB.AI — this process can be transformed, reducing the time to generate detailed, data-rich research reports from days to just minutes.
- Pattern Matching for Forecasting and Prediction: Agents equipped with advanced reasoning can combine deep learning model forecasts, patterns identified by sophisticated matching tools (like Temporal IQ), and unstructured contextual insights to build more accurate, dynamic trading strategies. This holistic approach enables smarter, faster decision-making based on a broader understanding of market scenarios.
- Autonomous Satellite Image Detection: In aerospace and defense, autonomous agent systems can rapidly analyze satellite imagery to detect evolving scenarios, such as the construction of new ports or changes on the battlefield. These agents can independently identify significant changes, generate detailed reports, and even propose potential courses of action, accelerating decision-making and enhancing situational awareness.
- Autonomous Production Line Optimization: Agentic AI systems can monitor manufacturing lines in real time, detect inefficiencies or equipment issues, and dynamically adjust workflows without human intervention. For example, an agent could identify early signs of machine failure, reroute production tasks, order replacement parts, and optimize workforce schedules — all to minimize downtime and keep operations on track. By analyzing operational data, maintenance logs, and external supply factors, agentic systems enable more resilient, adaptive, and cost-efficient manufacturing.
Q7. Agentic AI vs. Generative AI vs Deep Learning: What are the differences and similarities?
Deep learning is a subfield of machine learning that uses neural networks with multiple layers to learn complex patterns from large datasets. It enables a wide range of applications, including predictive modeling, classification, regression, and generation. Common deep learning architectures include Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), and transformers.
Generative AI refers to transformer-based deep neural network models that can generate new content, such as text, images, audio, video, and code, based on patterns learned from training data. Some examples include ChatGPT, Claude, Gemini, Llama, and Mistral.
Agentic AI is a framework that leverages Generative AI models to complete specific tasks. Agents are given instructions, a model, and a set of tools that enable them to plan and reason to complete a given task. Agentic systems can also be designed to operate autonomously, requiring little to no human intervention.
Q8. What technology is necessary for AI at scale?
There are several key technologies required to implement AI at scale:
- Graphics Processing Units (GPUs): High-performance chips that are designed for matrix operations and parallel processing. GPUs are necessary to train, fine-tune, and run inference on models at scale.
- High-Bandwidth Networking: Reducing latency for distributed model training and inference is necessary for large-scale AI implementation. High-bandwidth networking enables GPUs to ‘talk to each other’ with minimal latency. Technologies like InfiniBand and Ethernet with Remote Direct Memory Access (RDMA) can help facilitate this high-throughput, low-latency communication between GPUs.
- Data Platform: When working with petabytes of data, not all data should be sent to the AI model, as it would be too costly and time-consuming. A high-performance data platform, like the KX AI database, acts to store and serve relevant information to the models as necessary. It handles production-scale, real-time and historical data while offering low-latency querying, multimodal RAG via vector search, transformation, and filtering.
- Model Frameworks: While AI models might be developed using frameworks like PyTorch, deploying them efficiently requires optimization tools and serving platforms, such as ONNX Runtime, TensorRT, and Triton Inference Server, to prepare them for fast inference in real-world applications (a minimal export sketch follows this list).
- Agentic Orchestration Frameworks: Agentic frameworks, like LangChain, AgentIQ, Pydantic, and CrewAI, enable AI agents to autonomously plan tasks, manage memory, and execute goals using integrated tools. They also help to orchestrate multi-agent implementations.
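As a small example of the model-framework point above, the snippet below exports a toy PyTorch model to ONNX so an optimized runtime such as ONNX Runtime or TensorRT can serve it; the model architecture and file name are placeholders for this example.

```python
# Hedged sketch: exporting a trained PyTorch model to ONNX for optimized serving.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
model.eval()

dummy_input = torch.randn(1, 128)            # example input that defines the graph shape
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                            # placeholder output path
    input_names=["features"],
    output_names=["score"],
    dynamic_axes={"features": {0: "batch"}}, # allow variable batch size at inference
)
```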
Q9. Let’s focus on GPUs to expedite parallel processing at a petabyte-scale. What’s the difference between AI accelerators and GPUs?
GPUs are general-purpose parallel processors, ideal for flexible deep learning training and large-scale inference, supported by mature ecosystems like CUDA and PyTorch. In contrast, AI accelerators are purpose-built for specific AI tasks, offering greater efficiency and lower latency for high-volume, fixed-model inference. While GPUs provide unmatched flexibility and broad compatibility, AI accelerators deliver optimized performance and cost savings for specialized workloads. It is also important to consider that some GPUs have custom cores designed for specific AI functionality, for example NVIDIA’s Tensor cores. Choosing the right mix depends on whether flexibility or task-specific optimization is the priority.
GPUs are selectively used within an agentic flow for tasks like multimodal embedding generation and LLM inference, where parallel processing is especially advantageous and Central Processing Units (CPUs) cannot keep up. Leveraging the strength of GPUs for use cases like AI equity research assistants, real-time alpha & beta extraction, and satellite image detection is particularly beneficial, as they embed and analyze petabytes of data.
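For instance, batch embedding generation is a typical GPU-bound step in such a flow. The sketch below uses the sentence-transformers library; the model name, documents, and batch size are illustrative choices, and a CUDA-capable GPU is assumed.

```python
# Hedged sketch: batch embedding generation on a GPU with sentence-transformers.
from sentence_transformers import SentenceTransformer

# Illustrative model choice; requires a CUDA-capable GPU for device="cuda".
model = SentenceTransformer("all-MiniLM-L6-v2", device="cuda")

documents = [
    "Quarterly revenue grew 12% year over year.",
    "The new port facility shows expanded container storage.",
]

# encode() batches the inputs and runs the forward passes on the GPU in parallel.
embeddings = model.encode(documents, batch_size=64, show_progress_bar=False)
print(embeddings.shape)  # (2, 384) for this particular model
```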
Q10. What are the tools to fully leverage GPUs for deep learning?
To fully leverage GPUs for deep learning, you need an integrated stack of tools that support high-performance training and efficient deployment. At the core is a deep learning framework, like PyTorch, which provides the ability to build and train neural networks and integrates with Compute Unified Device Architecture (CUDA) and GPUs. Underneath, CUDA enables fast parallel computation on NVIDIA GPUs, while libraries like TensorRT optimize models for low-latency, high-throughput inference. For scalable model serving, Triton Inference Server allows multiple models to run concurrently on shared GPU resources with batching, versioning, and monitoring support. Together, this stack ensures that models can be trained faster, deployed more efficiently, and scaled to meet demanding AI workloads.
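A minimal sketch of the core PyTorch-plus-CUDA pattern is shown below: move the model and data to the GPU device and train there. The toy model and random data are placeholders for the example.

```python
# Hedged sketch of GPU-accelerated training in PyTorch with a toy model and random data.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

inputs = torch.randn(256, 32, device=device)    # batch lives in GPU memory
targets = torch.randn(256, 1, device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                              # gradients computed on the GPU
    optimizer.step()
```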
……………………………………………..

Ryan Siegler, Data Scientist, KX
Ryan is a data scientist and technical leader specializing in AI technologies, including retrieval-augmented generation (RAG), large language models (LLMs), and vector databases. He also has extensive experience with enterprise chatbots, natural language processing, and training custom automatic speech recognition models.
Sponsored by KX