On AI Coding Assistants. Q&A with Ed Charbeneau
“Though AI speeds coding up, devs still need to master their craft”
Q1. Progress is claiming up to 30% productivity gains with the expanded AI coding assistants across Telerik and Kendo UI libraries. From your perspective as a developer advocate working directly with developers, what does that 30% actually mean in practical terms? Is it writing code faster, spending less time debugging, reducing context-switching, or something else entirely? Can you walk us through a real-world scenario where a developer using these AI assistants would tangibly feel that productivity boost in their daily work?
Productivity gains from the AI coding assistants generally come from developers maintaining a flow state, in which they are fully engaged in a task and free from distractions. AI adds value by allowing developers to stay in their IDE and out of external resources. For example, suppose a developer is tasked with creating a new dashboard page. They can use a prompt that includes a mockup or wireframe of the design and ask the assistant to: “Create the page shown in the picture, use Telerik (or Kendo UI) components for applicable UI elements. Generate sample data if no data is available.” This simple instruction is enough to generate a working page without the developer needing to reference documentation, external samples, or demos. With the basic implementation complete, the developer can either manually connect data sources or iterate with the assistant by providing additional commands and context.
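To make that concrete, here is a minimal sketch of the kind of page such a prompt might yield, assuming the KendoReact Grid package (`@progress/kendo-react-grid`); the component name, fields, and sample data are illustrative, not the assistant's actual output.

```tsx
import * as React from 'react';
import { Grid, GridColumn } from '@progress/kendo-react-grid';

// Hypothetical sample data, standing in for the records the assistant
// would generate when no real data source is available yet.
const sales = [
  { region: 'North', revenue: 125000, growth: 0.12 },
  { region: 'South', revenue: 98000, growth: 0.07 },
  { region: 'West', revenue: 143000, growth: 0.18 },
];

// A dashboard panel built from Kendo UI components, roughly what the
// one-line prompt plus a wireframe image could produce.
export const SalesDashboard = () => (
  <Grid data={sales} style={{ height: '300px' }}>
    <GridColumn field="region" title="Region" />
    <GridColumn field="revenue" title="Revenue (USD)" format="{0:c0}" />
    <GridColumn field="growth" title="YoY Growth" format="{0:p0}" />
  </Grid>
);
```

From a starting point like this, connecting a real data source is an ordinary, incremental edit rather than a blank-page problem.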
The reason such a simple prompt works in this scenario is the capabilities of LLMs. While your first instinct might be to explain what's in the image, that isn't necessary. When an LLM is given an image, it describes the elements found within the image and adds those details to its context. Natural language processing, combined with computer vision, is an extremely powerful tool for generating code.
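As a rough illustration of how an image rides along with a prompt, here is a minimal sketch assuming the OpenAI Node SDK's chat completions API with image input; the model name, URL, and prompt text are illustrative, not tied to the Telerik or Kendo UI assistants.

```ts
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Send the wireframe image alongside the short instruction; the model
// describes the image internally and adds those details to its own
// context, which is why the prompt text can stay this brief.
const completion = await client.chat.completions.create({
  model: 'gpt-4o', // illustrative multimodal model
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Create the page shown in the picture, use Kendo UI components for applicable UI elements. Generate sample data if no data is available.',
        },
        {
          type: 'image_url',
          image_url: { url: 'https://example.com/dashboard-wireframe.png' },
        },
      ],
    },
  ],
});

console.log(completion.choices[0].message.content);
```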
Q2. AI coding assistants have become ubiquitous, but the challenge often lies in how well they understand specific frameworks, component libraries, and established patterns. What makes Progress’s approach to AI assistants for Telerik and Kendo UI different from using a general-purpose tool like GitHub Copilot or ChatGPT? How have you trained or tuned these assistants to understand the nuances of your component libraries, and what kinds of framework-specific guidance can they provide that a general LLM might miss or get wrong?
Dedicated AI assistants benefit from grounding built on additional context about the specific Telerik or Kendo UI libraries. That context includes documentation, samples, and other data. When a general-purpose tool like GitHub Copilot is instructed to work with Telerik or Kendo UI components, the additional context is retrieved and added to the generalized AI model's knowledge. It's similar to how retrieval-augmented generation (RAG) works. In RAG systems, a generalized AI is provided additional context about highly specific data, so generated answers are much more accurate and less prone to hallucination.
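Here is a minimal sketch of that retrieval step. The `embed`, `vectorStore`, and `llm` helpers are hypothetical stand-ins for whatever embedding model, index, and LLM client a real system would use; this is not the Progress implementation.

```ts
// Hypothetical helpers: any embedding model, vector index, and LLM
// client would slot in here; these names are illustrative.
declare function embed(text: string): Promise<number[]>;
declare const vectorStore: {
  search(vector: number[], topK: number): Promise<{ text: string }[]>;
};
declare const llm: { complete(prompt: string): Promise<string> };

// Retrieval-augmented generation: fetch the most relevant library docs
// and samples, then prepend them to the question so the model answers
// from grounded context instead of guessing.
async function answerWithRag(question: string): Promise<string> {
  const queryVector = await embed(question);
  const docs = await vectorStore.search(queryVector, 5);
  const context = docs.map((d) => d.text).join('\n---\n');

  return llm.complete(
    `Use only the documentation below to answer.\n\n` +
      `Documentation:\n${context}\n\nQuestion: ${question}`
  );
}
```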
Q3. One of the persistent concerns with AI-generated code is quality, maintainability, and the risk of developers accepting suggestions they don’t fully understand. As you’ve worked with teams adopting these AI assistants, what patterns are you seeing around code review, testing, and ensuring that the productivity gains don’t come at the expense of code quality or technical debt? How should development teams approach governance and best practices when AI is generating significant portions of their UI code?
The generative AI coding space is rapidly evolving. Early adopters probably noticed issues with code quality. With the addition of agentic AI, assistants now perform checks through the compiler or other means to detect and fix errors. However, maintainability can still be a concern with agents, as they can produce unsustainable codebases or mix patterns drawn from multiple sources. The result can be “mystery meat” code that doesn't follow a standardized set of best practices (arguably, non-AI code can suffer from the same problem).
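A minimal sketch of that generate-compile-repair loop follows; `generateCode` and `compile` are hypothetical stand-ins for an LLM call and a compiler invocation (e.g. tsc or a language service), not a real agent framework's API.

```ts
// Hypothetical helpers standing in for an LLM call and a compiler run.
declare function generateCode(prompt: string): Promise<string>;
declare function compile(
  source: string
): Promise<{ ok: boolean; errors: string[] }>;

// Agentic loop: generate, verify through the compiler, and feed any
// diagnostics back into the next attempt instead of returning broken code.
async function generateVerified(prompt: string, maxAttempts = 3): Promise<string> {
  let source = await generateCode(prompt);
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await compile(source);
    if (result.ok) return source;
    source = await generateCode(
      `${prompt}\n\nPrevious attempt failed to compile with:\n` +
        result.errors.join('\n') +
        `\n\nPrevious attempt:\n${source}\n\nFix the errors.`
    );
  }
  throw new Error('Could not produce compiling code within the attempt budget');
}
```

Note that a loop like this catches compile errors, not design problems, which is why the maintainability concerns above still apply.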
Documentation is shifting from writing about code for humans to writing context for AI. This means we are spending less time on traditional docs, because GenAI can manage that task well. Instead, we're writing prompt and context engineering documents. Advanced prompts outlining how a particular project should be structured, code guidelines, and context cues are how teams maintain quality at scale.
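One way to picture this: a checked-in guidelines document gets prepended to every assistant request. The file name, its contents, and the `llm` client below are all hypothetical; this is a sketch of the pattern, not a specific tool.

```ts
import { readFile } from 'node:fs/promises';

// Hypothetical LLM client; any chat-style API would fit here.
declare const llm: {
  chat(messages: { role: 'system' | 'user'; content: string }[]): Promise<string>;
};

// Load the team's context-engineering document (e.g. a checked-in file
// describing project structure, code guidelines, and naming conventions)
// and send it as system context, so generated code follows house rules.
async function askWithTeamContext(userPrompt: string): Promise<string> {
  const guidelines = await readFile('docs/ai-guidelines.md', 'utf8');
  return llm.chat([
    { role: 'system', content: guidelines },
    { role: 'user', content: userPrompt },
  ]);
}
```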
Q4. Telerik and Kendo UI serve developers across multiple frameworks—ASP.NET, Angular, React, Vue, Blazor, and more. From a technical standpoint, how do these AI assistants handle the complexity of providing framework-appropriate suggestions? For instance, if a developer is working in Blazor versus React, are the AI recommendations fundamentally different in approach, or is there a unified model that adapts? And what challenges did you encounter in making AI assistants work across such a diverse technology stack?
Currently our approach is to have a dedicated assistant for each technology stack. Because front-end technologies are seldom used together, there is no crossover between AI assistants. Each framework-specific assistant behaves similarly, providing additional product context on top of generalized models. However, these agents will soon be part of a larger scope with the introduction of an orchestrator agent: an agent that can delegate tasks to one or many specialized agents. Say a developer wants to build a complete page that incorporates layout, adheres to a design system, and implements UI logic. The orchestrator can delegate, dispatching an assistant for each specialized task: the “layout agent” generates the layout, then the “UI assistant” implements components, all while the “design assistant” ensures the UI conforms to the desired design system's theme.
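A minimal sketch of that delegation pattern, under the assumption that each specialized agent exposes a simple task interface; the agent names mirror the answer above, but the `Agent` interface and `buildPage` pipeline are hypothetical, not the upcoming product's API.

```ts
// Hypothetical specialized agents; in a real system these would be the
// framework-specific assistants described above.
interface Agent {
  run(task: string, context: string): Promise<string>;
}

declare const layoutAgent: Agent; // generates the page layout
declare const uiAgent: Agent;     // implements Telerik/Kendo components
declare const designAgent: Agent; // enforces the design system's theme

// The orchestrator breaks a high-level request into specialized tasks,
// dispatches each to the appropriate agent, and threads results along.
async function buildPage(request: string): Promise<string> {
  const layout = await layoutAgent.run('Generate the page layout', request);
  const ui = await uiAgent.run('Implement the UI components', layout);
  return designAgent.run('Apply the design system theme', ui);
}
```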
Q5. Looking beyond the immediate productivity metrics, how do you see AI coding assistants fundamentally changing the developer experience and skill requirements for building enterprise applications with component libraries like Telerik and Kendo UI? Are we heading toward a future where developers need less deep knowledge of the underlying frameworks because AI can bridge those gaps, or is deep expertise becoming even more critical to properly guide and validate AI-generated code? What advice would you give to developers and teams who are trying to figure out how to integrate these AI tools into their workflows in a sustainable, value-creating way?
Software development, especially at the enterprise level, has always been about problem solving. Essentially the task is still the same; it's how we scale that has changed. Developers still need to solve problems, but with AI they can complete a larger number of tasks quickly. Developers still need to understand complex and often abstract concepts. AI is a massive help in onboarding developers to new platforms and frameworks quickly.
Resources:
……………………………………………………….

Ed Charbeneau, Principal Developer Advocate, Tech Relations, Progress Software
Ed is a web enthusiast, speaker, writer, design admirer, and Developer Advocate for Telerik. He has designed and developed web-based applications for business, manufacturing, and systems integration, as well as customer-facing websites. Ed enjoys geeking out to cool new tech, brainstorming about future technology, and admiring great design. Ed's latest projects can be found on GitHub.
Sponsored by Progress.