Over the past year, debates around AI agent autonomy have centered on the tension between control and flexibility. Google's recent update to Opal, its no-code visual agent builder, addresses that tension with a new "agent step" that turns static workflows into dynamic, interactive ones. Rather than hardcoding every step, Opal lets a model such as Gemini 3 Flash dynamically choose tools, plan its route through a workflow, and interact with users to reach a goal. Key features include adaptive routing, persistent memory across sessions, and human-in-the-loop orchestration, marking a meaningful step for enterprise agents.

The update signals a shift from rigidly programmed workflows to flexible, goal-driven frameworks in which the model manages complexity, memory improves interactions over time, and humans can step in when needed. By packaging these capabilities in a consumer product, Google effectively provides a reference architecture for IT teams building smarter, more adaptable agents that combine planning, tool use, memory, and collaboration.
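
To make the pattern concrete, here is a minimal Python sketch of a goal-driven "agent step" under stated assumptions: the planner is a stub standing in for a model call, and the tool names, memory store, and review policy are all hypothetical, not Opal's or Gemini's actual API. It only illustrates the loop structure the article describes: the model picks the next tool instead of following a hardcoded sequence, results accumulate in memory that could persist across sessions, and a human checkpoint can interrupt before sensitive steps.

```python
"""Sketch of a goal-driven agent step: dynamic tool choice, persistent
memory, and a human-in-the-loop checkpoint. All names are illustrative."""
from dataclasses import dataclass, field
from typing import Callable

# Tool registry: the planner chooses among these at run time (stand-in tools).
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"search results for {q!r}",
    "summarize": lambda text: f"summary of {text[:40]}...",
}

@dataclass
class AgentMemory:
    """Memory carried across sessions (here just an in-process list;
    a real system would load and save it between runs)."""
    events: list[str] = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.events.append(event)

def plan_next_action(goal: str, memory: AgentMemory) -> tuple[str, str] | None:
    """Stubbed planner: a real agent step would ask the model which tool to
    call next. Returns (tool_name, tool_input) or None when the goal is met."""
    if not any("search ->" in e for e in memory.events):
        return "search", goal
    if not any("summary of" in e for e in memory.events):
        return "summarize", memory.events[-1]
    return None  # planner judges the goal satisfied

def needs_human_review(tool_name: str) -> bool:
    """Human-in-the-loop checkpoint: pause before steps a policy flags."""
    return tool_name == "summarize"  # hypothetical policy for this sketch

def run_agent_step(goal: str, memory: AgentMemory) -> None:
    """Loop until the planner decides the goal is met, rather than executing
    a fixed, hardcoded sequence of workflow steps."""
    while (action := plan_next_action(goal, memory)) is not None:
        tool_name, tool_input = action
        if needs_human_review(tool_name):
            print(f"[review] about to run {tool_name!r} on {tool_input!r}")
        result = TOOLS[tool_name](tool_input)
        memory.remember(f"{tool_name} -> {result}")

if __name__ == "__main__":
    memory = AgentMemory()  # would be restored from storage between sessions
    run_agent_step("find recent Opal updates", memory)
    print("\n".join(memory.events))
```

The design choice the sketch highlights is the inversion of control: the workflow author specifies the goal, the available tools, and the review policy, while the planning loop, not a fixed graph, decides which tool runs next and when to stop.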