C# 14 introduces extension members | InfoWorld


C# 14 introduces extension members 12 May 2025, 3:47 pm

C# 14, a planned update to Microsoft’s cross-platform, general-purpose programming language, adds an extension member syntax to build on the familiar feature of extension methods.

Extension members allow developers to “add” methods to existing types without having to create a new derived type, recompile, or otherwise modify the original type. The latest C# 14 preview, released with .NET 10 Preview 3, adds static extension methods and instance and static extension properties, according to Kathleen Dollard, principal program manager for .NET at Microsoft, in a May 8 blog post.

Extension members also introduce an alternative syntax for extension methods. The new syntax is optional, and developers do not need to change their existing extension methods. Regardless of the style, extension members add functionality to types. This is particularly useful if developers do not have access to the type’s source code or if the type is an interface, Dollard said. If developers do not like using !list.Any(), they can create their own extension method IsEmpty(). Starting in the latest preview, developers can make that a property and use it just like any other property of the type. Using the new syntax, developers also can add extensions that work like static properties and methods on the underlying type.

Creating extension members has been a long journey, and many designs have been explored, Dollard said. Some needed the receiver repeated on every member; some impacted disambiguation; some placed restrictions on how developers organized extension members; some created a breaking change if updated to the new syntax; some had complicated implementations; and some just did not feel like C#, she said. The new extension member syntax preserves the enormous body of existing this-parameter extension methods while introducing new kinds of extension members, she added. It offers an alternative syntax for extension methods that is consistent with the new kinds of members and fully interchangeable with the this-parameter syntax. A general release of C# 14 is expected with .NET 10 in November 2025.


What software developers need to know about cybersecurity 12 May 2025, 5:00 am

In 2024, cyber criminals didn’t just knock on the front door—they walked right in. High-profile breaches hit widely used apps from tech giants and consumer platforms alike, including Snowflake, Ticketmaster, AT&T, 23andMe, Trello, and Life360. Meanwhile, a massive, coordinated attack targeting Dropbox, LinkedIn, and X (formerly Twitter) compromised a staggering 26 billion records.

These aren’t isolated incidents—they’re a wake-up call. If reducing software vulnerabilities isn’t already at the top of your development priority list, it should be. The first step? Empower your developers with secure coding best practices. It’s not just about writing code that works—it’s about writing code that holds up under fire.

Start with the known

Before developers can defend against sophisticated zero-day attacks, they need to master the fundamentals—starting with known vulnerabilities. These trusted industry resources provide essential frameworks and up-to-date guidance to help teams code more securely from day one:

  • OWASP Top 10: The Open Worldwide Application Security Project (OWASP) curates regularly updated Top 10 lists that highlight the most critical security risks across web, mobile, generative AI, API, and smart contract applications. These are must-know threats for every developer.
  • MITRE: MITRE offers an arsenal of tools to help development teams stay ahead of evolving threats. The MITRE ATT&CK framework details adversary tactics and techniques while CWE (Common Weakness Enumeration) catalogs common coding flaws with serious security implications. MITRE also maintains the CVE Program, an authoritative source for publicly disclosed cybersecurity vulnerabilities.
  • NIST NVD: The National Institute of Standards and Technology (NIST) maintains the National Vulnerability Database (NVD), a repository of security checklist references, vulnerability metrics, software flaws, and impacted product data. 

Training your developers to engage with these resources isn’t just a best practice; it’s your first line of defense.

Standardize on secure coding techniques

Training developers to write secure code shouldn’t be treated as a one-time assignment; it requires a cultural shift. Start by making secure coding techniques the standard practice across your team. Two of the most critical (yet frequently overlooked) practices are input validation and input sanitization.

Input validation ensures incoming data is appropriate and safe for its intended use, reducing the risk of logic errors and downstream failures. Input sanitization removes or neutralizes potentially malicious content—like script injections—to prevent exploits like cross-site scripting (XSS).
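
To make those two practices concrete, here is a minimal Python sketch (illustrative only; the ID rule and function names are hypothetical, not taken from any particular codebase) that validates a user-supplied identifier against an allow-listed pattern and sanitizes free-text input by escaping HTML before it is rendered:

  import html
  import re

  # Hypothetical allow-list rule: IDs are 1-32 alphanumeric characters.
  ID_PATTERN = re.compile(r"[A-Za-z0-9]{1,32}")

  def validate_user_id(raw_id: str) -> str:
      """Input validation: reject anything that is not a well-formed ID."""
      if not ID_PATTERN.fullmatch(raw_id):
          raise ValueError("invalid user id")
      return raw_id

  def sanitize_comment(raw_comment: str) -> str:
      """Input sanitization: escape markup so it cannot run as script (XSS)."""
      return html.escape(raw_comment.strip())

  if __name__ == "__main__":
      print(validate_user_id("alice42"))
      print(sanitize_comment('<script>alert("xss")</script> nice post'))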

Get access control right

Authentication and authorization aren’t just security check boxes—they define who can access what and how. That covers access to code bases, development tools, libraries, APIs, and other assets, as well as how entities can access sensitive information and view or modify data. Best practices dictate employing a least-privilege approach to access, granting only the permissions necessary for users to perform required tasks.
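
As a hedged illustration of least privilege in code (the roles and permission names below are hypothetical), each role is granted only the permissions it needs, and everything else is denied by default:

  # Hypothetical role-to-permission mapping: grant only what each role needs.
  ROLE_PERMISSIONS = {
      "reader": {"report:view"},
      "analyst": {"report:view", "report:export"},
      "admin": {"report:view", "report:export", "user:manage"},
  }

  def require_permission(role: str, permission: str) -> None:
      """Deny by default: raise unless the role explicitly holds the permission."""
      if permission not in ROLE_PERMISSIONS.get(role, set()):
          raise PermissionError(f"role {role!r} lacks {permission!r}")

  def export_report(role: str, report_id: str) -> str:
      require_permission(role, "report:export")
      return f"exported {report_id}"

  if __name__ == "__main__":
      print(export_report("analyst", "q3-sales"))    # allowed
      try:
          export_report("reader", "q3-sales")        # denied: least privilege
      except PermissionError as err:
          print(err)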

Don’t forget your APIs

APIs may be less visible, but they form the connective tissue of modern applications. APIs are now a primary attack vector, with API attacks growing 1,025% in 2024 alone. The top security risks? Broken authentication, broken authorization, and lax access controls. Make sure security is baked into API design from the start, not bolted on later.
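
To show what “baked in” authentication and authorization can look like at the handler level, here is a minimal, framework-free Python sketch (the token store, data store, and field names are all hypothetical stand-ins) that checks both that the caller is authenticated and that the caller owns the requested object:

  # Hypothetical in-memory stores standing in for a session service and a database.
  SESSIONS = {"token-abc": "alice"}
  ORDERS = {
      "order-1": {"owner": "alice", "total": 42.50},
      "order-2": {"owner": "bob", "total": 13.99},
  }

  class ApiError(Exception):
      def __init__(self, status: int, message: str):
          super().__init__(message)
          self.status = status
          self.message = message

  def get_order(auth_token: str, order_id: str) -> dict:
      # Authentication: reject missing or unknown tokens.
      user = SESSIONS.get(auth_token)
      if user is None:
          raise ApiError(401, "not authenticated")
      # Object-level authorization: callers may only read their own orders.
      order = ORDERS.get(order_id)
      if order is None or order["owner"] != user:
          raise ApiError(403, "not authorized for this order")
      return order

  if __name__ == "__main__":
      print(get_order("token-abc", "order-1"))       # alice reads her own order
      try:
          get_order("token-abc", "order-2")          # blocked: belongs to bob
      except ApiError as err:
          print(err.status, err.message)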

Assume sensitive data will be under attack

Sensitive data consists of more than personally identifiable information (PII) and payment information. It also includes everything from two-factor authentication (2FA) codes and session cookies to internal system identifiers. If exposed, this data becomes a direct line to the internal workings of an application and opens the door to attackers. Application design should consider data protection before coding starts, and sensitive data must be encrypted at rest and in transit with strong, up-to-date algorithms. Questions developers should ask: What data is necessary? Could data be exposed during logging, autocompletion, or transmission?
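
For encryption in transit, TLS does the heavy lifting; for data at rest, a vetted library is the safest route. The sketch below is a minimal example of symmetric encryption that assumes the third-party cryptography package (pip install cryptography), with a hypothetical session cookie as the payload:

  from cryptography.fernet import Fernet

  # In practice the key comes from a secrets manager, never from source code.
  key = Fernet.generate_key()
  fernet = Fernet(key)

  session_cookie = b"session-id=9f8e7d; role=user"   # hypothetical sensitive value
  encrypted = fernet.encrypt(session_cookie)         # store only the ciphertext
  decrypted = fernet.decrypt(encrypted)              # decrypt only when needed

  assert decrypted == session_cookie
  print(encrypted[:16], b"...")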

Log and monitor applications

Application logging and monitoring are essential for detecting threats, ensuring compliance, and responding promptly to security incidents and policy violations. Logging is more than a check-the-box activity—for developers, logging can be a critical line of defense. Application logs should:

  • Capture user context to identify suspicious or anomalous activity,
  • Ensure log data is properly encoded to guard against injection attacks, and
  • Include an audit trail for all critical transactions.

Logging and monitoring aren’t limited to the application. They should span the entire software development life cycle (SDLC) and include real-time alerting, incident response plans, and recovery procedures.
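
A minimal Python sketch of the logging practices above (the field names and events are hypothetical) captures user context in each audit record and JSON-encodes the output, which keeps user-supplied values from injecting fake log lines:

  import json
  import logging

  logging.basicConfig(level=logging.INFO, format="%(message)s")
  audit_log = logging.getLogger("audit")

  def log_security_event(user_id: str, action: str, outcome: str) -> None:
      """Emit a JSON-encoded audit record that includes user context."""
      record = {"user_id": user_id, "action": action, "outcome": outcome}
      # json.dumps escapes newlines and control characters in user input,
      # so a malicious value cannot forge a separate log entry.
      audit_log.info(json.dumps(record))

  if __name__ == "__main__":
      log_security_event("alice", "password_change", "success")
      log_security_event("mallory\nFAKE ENTRY", "login", "failure")  # stays one line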

Integrate security in every phase

You don’t have to compromise security for speed. When effective security practices are baked in across the development process—from planning and architecture to coding, deployment, and maintenance—vulnerabilities can be identified early to ensure a smooth release. Training developers to think like defenders while they build can accelerate delivery, reduce the risk of costly rework later in the cycle, and result in more resilient software.

Build on secure foundations

While secure code is important, it’s only part of the equation. The entire SDLC has its own attack surface to manage and defend. Every API, cloud server, container, and microservice adds complexity and provides opportunities for attackers.

In fact, one-third of the most significant application breaches of 2024 resulted from attacks on cloud infrastructure, while the rest were traced back to compromised APIs and weak access controls.

Worse still, attackers aren’t waiting until software is in production. The 2025 State of Application Risk report from Legit Security found that every organization surveyed had high or critical risks lurking in their development environments. The same report found that these organizations also had exposed secrets, with over one-third found outside of source code—in tickets, logs, and artifacts. What can you do? To reduce risk, develop a strategy to prioritize visibility and control across development environments, because attackers can strike during any phase.   

Manage third-party risk

So, you’ve implemented best practices across your development environment, but what about your supply chain vendors? Applications are only as secure as their weakest links. Software ecosystems today are interconnected and complex. Third-party libraries, frameworks, cloud services, and open-source components all represent prime entry points for attackers.

A software bill of materials (SBOM) can help you understand what’s under the hood, providing a detailed inventory of application components and libraries to identify potential vulnerabilities. But that’s just the beginning, because development practices can also introduce supply chain risk.

To reduce third-party risk:

  • Validate software as artifacts move through build pipelines to make sure it hasn’t been compromised.
  • Use version-specific containers for open-source components to support traceability.
  • Ensure pipelines validate code and packages before use, especially from third-party repositories.

Securing the software supply chain means assuming every dependency could be compromised.
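
One small, concrete version of that assumption is refusing to use any artifact whose checksum does not match a pinned value. The Python sketch below (the file name and pinned digest are hypothetical placeholders) verifies a downloaded dependency against a SHA-256 digest recorded in, say, an SBOM or lock file before the build proceeds:

  import hashlib
  import sys

  # Hypothetical pinned digest recorded at the time the dependency was vetted.
  EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

  def verify_artifact(path: str, expected_sha256: str) -> None:
      """Refuse to use a build artifact whose digest does not match the pin."""
      digest = hashlib.sha256()
      with open(path, "rb") as artifact:
          for chunk in iter(lambda: artifact.read(8192), b""):
              digest.update(chunk)
      if digest.hexdigest() != expected_sha256:
          sys.exit(f"integrity check failed for {path}")

  if __name__ == "__main__":
      # Usage: python verify_artifact.py vendor-lib-1.4.2.tar.gz
      verify_artifact(sys.argv[1], EXPECTED_SHA256)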

Commit to continuous monitoring

Application security is a moving target. Tools, threats, dependencies, and even the structure of your teams evolve. Your security posture should evolve with them. To keep pace, organizations need an ongoing monitoring and improvement program that includes:

  • Regular reviews and updates to secure development practices,
  • Role-specific training for everyone across the SDLC,
  • Routine audits of code reviews, access controls, and remediation workflows, and
  • Penetration testing and red teaming, wherever appropriate.

Security maturity isn’t about perfection—it’s about progress, visibility, and discipline. Your development organization should never stop asking the question, “What’s changed, and how does it impact our risk?”

Security is no longer optional; it is a core competency for modern developers. Invest in training, standardize your practices, and make secure coding second nature. Your applications—and your users—will thank you.

Jose Lazu is associate director of product at CMD+CTRL Security.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.


How to build (real) cloud-native applications 12 May 2025, 5:00 am

Cloud-native applications are increasingly the default way to deploy in both public clouds and private clouds.

But what exactly is a cloud-native application and how do you build one?

It’s important to start with first principles and define what cloud-native actually means. Like many technology terms, cloud-native is sometimes misunderstood, much like cloud computing itself was and continues to be in some respects. Simply hosting an application on a remote server doesn’t make it a cloud application. When it comes to cloud, the US National Institute of Standards and Technology (NIST) established a formal definition of cloud computing in 2011 in Special Publication 800-145:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Cloud-native doesn’t simply mean something that was built for the cloud, though some might use the term to mean that. Cloud-native is a term that doesn’t have a NIST definition. But it does have a formal definition that was developed through an open source process under the Cloud Native Computing Foundation (CNCF). That definition is maintained at https://github.com/cncf/toc/blob/main/DEFINITION.md and states:

Cloud-native technologies and architectures typically consist of some combination of containers, service meshes, multi-tenancy, microservices, immutable infrastructure, serverless, and declarative APIs.

What are cloud-native applications?

You can run just about anything you want in the cloud. Take literally any application, create a virtual machine, and you’ll find a cloud host that can run it. That’s not, however, what cloud-native applications are all about.

Cloud-native applications are designed and built specifically to operate in cloud environments. It’s not about just “lifting and shifting” an existing application that runs on-premises and letting it run in the cloud.

Unlike traditional monolithic applications, which are often tightly coupled, cloud-native applications are modular. A cloud-native application is not an application stack but a decoupled application architecture.

Perhaps the most atomic level of a cloud-native application is the container. A container could be a Docker container, though any type of container that conforms to the Open Container Initiative (OCI) specifications works just as well. Often you’ll see the term microservices used to define cloud-native applications. Microservices are small, independent services that communicate over APIs—and they are typically deployed in containers. A microservices architecture allows for independent scaling in an elastic way that supports the way the cloud is supposed to work.
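
As a toy illustration of the kind of small, independently deployable service described above (the endpoint and port are arbitrary choices, not a standard), the following Python sketch uses only the standard library to expose a single health-check endpoint; in a cloud-native deployment, a service like this would be packaged into a container image and scaled independently of its peers:

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class HealthHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          if self.path == "/healthz":
              body = json.dumps({"status": "ok"}).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)   # the only thing this tiny service does
          else:
              self.send_response(404)
              self.end_headers()

  if __name__ == "__main__":
      HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()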

While a container can run on all different types of host environments, the most common way that containers and microservices are deployed is inside of an orchestration platform. The most commonly deployed container orchestration platform today is the open source Kubernetes platform, which is supported on every major public cloud.

Key characteristics of cloud-native applications

  • Microservices architecture: Applications broken into smaller, loosely coupled services that can be developed, deployed, and scaled independently
  • Containerization: Packages microservices with dependencies, ensuring consistency across environments and efficient resource use
  • Orchestration platform: Provides a container deployment platform with integrated scaling, availability, networking, and management features
  • CI/CD: Automated pipelines for rapid code integration, testing, and deployment
  • Devops culture: Collaboration between dev and ops teams creates shared responsibility, faster cycles, and reliable releases
  • Scalability and resilience: Dynamically scales resources based on demand and handles failures gracefully for high availability
  • Distributed system design: Services operating across multiple servers enabling component-specific scaling, fault tolerance, and optimized resource utilization

Frameworks, languages, and tools for building cloud-native applications

Developing cloud-native applications involves a diverse set of technologies. Below are some of the most commonly used frameworks, languages, and tools.

Programming languages

  • Go: Developed by Google, Go is appreciated for its performance and efficiency, particularly in cloud services.
  • Java: A versatile language with a rich ecosystem, often used for enterprise-level applications.
  • JavaScript: Widely used for scripting and building applications as well as real-time services. 
  • Python: Known for its simplicity and readability, making it suitable for various applications, including web services and data processing.

Cloud-native containerization and orchestration

The basic units of cloud-native application deployment are containers of some form, plus a platform that orchestrates the running and management of those containers in the cloud.

Key technologies include OCI-compliant containers such as Docker for packaging services, and orchestration platforms such as Kubernetes for running and managing them at scale.

Cloud-native development frameworks

Programming languages alone are often not enough for the development of larger enterprise applications. That’s where application development frameworks come into play. 

Popular cloud-native development frameworks include the following:

  • Django: Commonly used web framework for Python that has increasingly been used for cloud-native application development in recent years.
  • Micronaut: A full-stack framework for building cloud-native applications with Java.
  • Quarkus: Another framework created specifically to enable Java developers to build cloud-native applications.
  • .NET Aspire: Microsoft‘s open-source framework for building cloud-native applications with .NET.
  • Next.js: A React JavaScript framework that is particularly well-suited for building cloud-native web applications.
  • Node.js: A lean and fast JavaScript runtime environment with an event-driven, non-blocking I/O model.

Continuous integration and continuous deployment

CI/CD pipelines are essential components of cloud-native development, enabling automated testing, building, and deployment of applications. 

Modern CI/CD tools integrate closely with container technologies and cloud platforms, providing integrated automation across the entire application lifecycle. These tools often implement practices like automated testing, canary deployments and blue-green deployments that reduce risk and accelerate delivery.

Commonly used tools include Jenkins, GitHub Actions, GitLab CI/CD, and Argo CD.

Observability and monitoring

Cloud-native applications require observability technology to provide insights into the behavior of distributed systems. This includes monitoring, logging, and tracing capabilities that provide a comprehensive view of application performance and health across multiple services and infrastructure components.

Tools that support the OpenTelemetry standard, along with platforms like Prometheus for metrics and Jaeger for distributed tracing, form the backbone of cloud-native observability.
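
As a hedged example of what the metrics side of that stack looks like in application code, the Python sketch below assumes the third-party prometheus_client package (pip install prometheus-client) and uses hypothetical metric names; it exposes a request counter and a latency histogram for a Prometheus server to scrape:

  import random
  import time

  from prometheus_client import Counter, Histogram, start_http_server

  REQUESTS = Counter("app_requests_total", "Total requests handled")
  LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

  def handle_request() -> None:
      with LATENCY.time():                 # record how long the "work" takes
          time.sleep(random.uniform(0.01, 0.05))
      REQUESTS.inc()                       # count the request

  if __name__ == "__main__":
      start_http_server(8000)              # metrics exposed at http://localhost:8000/
      while True:
          handle_request()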

Best practices for cloud-native application development

In recent years, all of the major public cloud hyperscalers have published best practices for cloud-native applications, with the primary guidelines often appearing under the name of a Well-Architected Framework.

The foundational principles behind the Well-Architected Framework help to ensure that cloud-native applications are secure, reliable, and efficient. The core principles include the following:

  • Operational excellence: Monitor systems and improve processes.
  • Security: Implement strong identity and access management, data protection, and incident response.
  • Reliability: Design systems to recover from failures and meet demand.
  • Performance efficiency: Use computing resources efficiently.
  • Cost optimization: Manage costs to maximize the value delivered.

Cloud-native applications represent a fundamental shift in how organizations design, build, and deploy software. Rather than simply moving existing applications to cloud infrastructure as a virtual machine, the cloud-native approach embraces the cloud’s unique capabilities through architectural decisions that prioritize flexibility, resilience, and scale.

By embracing cloud-native principles, organizations position themselves to benefit from the full potential of cloud computing—not just as a hosting model, but as an approach to building applications that can evolve rapidly, operate reliably, and scale dynamically as usage requires.


MySQL at 30: Still important but no longer king 12 May 2025, 5:00 am

This month MySQL turns 30. Once the bedrock of web development, MySQL remains immensely popular. But as MySQL enters its fourth decade, it ironically has sown the seeds of its own decline, especially relative to Postgres. Oracle, the steward of MySQL since 2010, may proclaim MySQL is “the world’s favorite database,” but that has been objectively false for a long time, as shown by developer sentiment surveys and popularity rankings from Stack Overflow and DB-Engines.

None of which is to deprecate MySQL’s importance. It was and is critical infrastructure for the web. But it’s no longer developers’ default database for most things. How did this happen?

For years, MySQL was the go-to database of the internet. Born as a lightweight, open source alternative to expensive commercial systems, MySQL made it easy to build on the web. It powered the rise of the LAMP (Linux, Apache, MySQL, PHP) stack. It was simple, fast, and free. But over time, the very things that made MySQL dominant came to constrain its growth. Its focus on simplicity made it easy to learn, but hard to evolve. Its permissive early design helped it spread fast but also left it ill-suited to modern, complex applications. Its dominant position left it less hungry for innovation than PostgreSQL, a database that has relentlessly closed gaps and added new capabilities.

The rise of MySQL in the web era

MySQL’s origin story is rooted in the early open source movement. In 1995, Finnish developer Michael “Monty” Widenius created MySQL as an internal project, releasing it to the public soon after. By 2000, MySQL was fully open sourced (GPL license), and its popularity exploded. As the database component of the LAMP stack, MySQL offered an irresistible combination for web developers: It was free, easy to install, and “good enough” to back dynamic websites. In an era dominated by expensive proprietary databases, MySQL’s arrival was perfectly timed. Web startups of the 2000s—Facebook, YouTube, Twitter, Flickr, and countless others—embraced MySQL to store user data and content. MySQL quickly became synonymous with building websites.

Early MySQL gained traction despite some trade-offs. In its youth, MySQL lacked certain “enterprise” features (like full SQL compliance or transactions in its default engine), but this simplicity was a feature, not a bug, for many users. It made MySQL blazingly fast for reads and simple queries and easier to manage for newcomers. Developers could get a MySQL database running with minimal fuss—a contrast to heavier systems like Oracle or even PostgreSQL at the time. “It’s hard to compete with easy,” I observed in 2022.

By the mid-2000s, MySQL was everywhere and was increasingly feature-rich. The database had matured (adding InnoDB, a more robust storage engine for transactions) and continued to ride the web explosion. Even as newer databases emerged, MySQL remained a default choice for millions of deployments, from small business applications to large-scale web infrastructure. As of 2025, MySQL is likely still the widest-deployed open source (or proprietary) database globally by sheer volume of installations. Scads of applications were written with MySQL as the backing store, and many remain in active use. In this sense, MySQL today is a bit like IBM’s DB2: a workhorse database with a massive installed base that isn’t disappearing, even if it’s no longer the trendiest choice.

Momentum shifts elsewhere

In the past decade, MySQL’s once-unquestioned dominance of open source databases has faced strong headwinds from both relatively new contenders (MongoDB, Redis, Elasticsearch) and old (Postgres). From my vantage point at MongoDB, I’ve seen a large influx of developers turn to MongoDB to more flexibly build web and other applications. But it’s Postgres that has become the “easy button” for developers who want to stick to SQL but need more capabilities than MySQL affords.

Whereas web developers in 2005 might have reached for MySQL for virtually any project, today they have a plethora of choices tailored to specific needs. Need a flexible JSON document store to support general-purpose database needs? MongoDB beckons. Building real-time analytics or full-text search? Elasticsearch could be a better fit. Looking for an in-memory cache or high-speed data structure store? Redis is there. Even in data analytics and data warehousing, cloud-native options such as Snowflake and BigQuery have taken off.

But it’s Postgres that can take the credit (or blame, if you prefer) for MySQL’s decline. The reasons for this are both technical and cultural. Postgres offers capabilities MySQL historically has not. Among them:

  • Richer SQL features and standards compliance: PostgreSQL has long prioritized SQL standards and advanced features. It supports complex queries, window functions, common table expressions, full-text search, and robust ACID (atomicity, consistency, isolation, durability) transactions, some of which MySQL lacked or added only later. Postgres can handle complex, enterprise-grade workloads without bending the rules.
  • Extensibility and flexibility: Postgres is highly extensible. You can define new data types, index types, and even write custom extensions or stored procedures in various languages. Whether it’s GIS/geospatial data (PostGIS), time-series extensions, or pgcrypto and pgvector extensions for crypto and AI use cases, Postgres can morph to fit needs. These extensibility hooks have let Postgres stay on the cutting edge, even when these extensions may offer demonstrably worse performance for modern applications. Postgres’ extensibility still shines compared to MySQL’s more limited plug-in model.
  • Open source, open culture: Both MySQL and Postgres are open source, but PostgreSQL’s license and governance are more permissive. Postgres is a true community-driven project, developed by a core global team and supported by many companies without a single owner. MySQL, by contrast, uses GPL (for the open version) and has been owned by Oracle for years. Oracle’s stewardship has been a double-edged sword. On one hand, Oracle has undoubtedly invested in MySQL’s development. The current MySQL 8.x series is a far cry from the MySQL of the 2000s. It’s a much more robust, feature-rich database (with improvements in replication, security, GIS, JSON support, and more) thanks in part to Oracle’s engineering resources. But that same tight control of MySQL engineering has altered the MySQL community dynamics in ways that arguably have slowed its momentum.

In short, PostgreSQL has convinced many that it offers more “future-proof” value than MySQL.

MySQL will persist

Despite all the challenges, MySQL will be with us for a long, long time. There are good reasons many developers and organizations stick with MySQL even as alternatives rise. First and foremost is MySQL’s track record of reliability at scale. It has proven itself capable of handling enormous workloads. The Facebooks and Twitters of the world did not outgrow MySQL so much as bend MySQL to their will through custom tools and careful engineering. If MySQL could power the data needs of a social network with billions of users, it can probably handle your e-commerce site or internal application just fine. That pedigree counts for a lot.

Secondly, MySQL remains simple and familiar to legions of developers. It’s often the first relational database new developers learn, thanks to its prevalence in tutorials and boot camps, and its integration with beginner-friendly tools. MySQL’s documentation is extensive, and its error messages and behaviors are well-known. In many cases, developers don’t need the advanced features of PostgreSQL, and MySQL’s lighter footprint (and yes, sometimes forgiving nature with SQL syntax) can make development feel faster. The old perception that “MySQL is easier” still lingers, even if PostgreSQL has improved its ease of use over the years. This familiarity creates inertia: Organizations have MySQL DBAs, MySQL backup scripts, and MySQL monitoring already in place. Switching is hard.

There’s also an ecosystem lock-in of sorts. Hundreds of popular web applications and platforms are built on MySQL (or its drop-in cousin MariaDB). For example, WordPress, which powers a huge portion of websites globally, uses MySQL/MariaDB as its database layer. Many other content management systems, e-commerce platforms, and appliances have a MySQL dependency. This entrenched base means MySQL continues to be deployed by default as people set up those tools. Even cloud providers, while they enthusiastically offer PostgreSQL, also offer fully managed MySQL services (often MySQL-compatible services such as Amazon Aurora) to cater to demand. In short, MySQL is deeply embedded in the infrastructure of the web, and that isn’t undone overnight.

A triumph of open source

However, the very reasons MySQL persists also threaten its future loyalty. MySQL’s widespread legacy use means it will remain relevant, but new projects are increasingly likely to choose something else, whether that’s PostgreSQL, MongoDB, Redis, or whatever you prefer. The risk for MySQL is that a new generation of developers may simply not develop the same attachment to it. Momentum matters in technology communities: PostgreSQL has it; MySQL a bit less so.

Additionally, if MySQL doesn’t keep up with new trends, it could see even loyal users exploring alternatives. For instance, when developers started caring about embeddings and vector search for AI applications, Postgres had an answer with pgvector, and MongoDB added Atlas Vector Search. MySQL had nothing comparable until very recently. MySQL’s continued evolution will be crucial to maintaining loyalty, and that again ties back to how Oracle and the MySQL community navigate the project’s direction in the coming years.

As MySQL turns 30, we should celebrate the incredible legacy of this open source database. Few software projects have had such a profound impact on an era of computing. MySQL empowered an entire generation of developers to build dynamic websites and applications, lowering the barrier to entry for startups and open source projects alike. MySQL demonstrated that open source infrastructure could compete with—and even surpass—proprietary solutions, reshaping the database industry’s economics. For that, MySQL will always deserve credit.

MySQL’s glory days might be behind it, but its story is far from over. The database world is better off for the 30 years of competition and innovation that MySQL inspired and continues to inspire.


Visual Studio Code beefs up AI coding features 9 May 2025, 11:23 pm

Visual Studio Code 1.100, the latest release of Microsoft’s code editor, has arrived with several upgrades to its AI chat and AI code editing capabilities. Highlighting the list are support for Markdown-based instructions and prompt files, faster code editing in agent mode, and more speed and accuracy in Next Edit Suggestions.

Released May 8, Visual Studio Code 1.100, also known as the April 2025 release, can be downloaded for Windows, macOS, and Linux at code.visualstudio.com.

VS Code 1.100 allows developers to tailor their AI chat experience in the code editor to their specific coding practices and technology stack, by using Markdown-based files. Instructions files are used to define coding practices, preferred technologies, project requirements, and other custom instructions, while prompt files are used to create reusable chat requests for common tasks, according to Microsoft. Developers could create different instructions files for different programming languages or project types. A prompt file might be used to create a front-end component, Microsoft said.

The new VS Code release also brings faster AI-powered code editing in agent mode, especially in large files, due to the addition of support for OpenAI’s apply patch editing format and Anthropic’s replace string tool. The update for OpenAI is on by default in VS Code Insiders and gradually rolling out to Stable, Microsoft said, while the update for Anthropic is available for all users.

Visual Studio Code 1.100 introduces a new model for powering Next Edit Suggestions (NES), intended to offer faster and more contextually relevant code recommendations. This updated model delivers suggestions with reduced latency and aligns more closely with recent edits, according to Microsoft. NES can now also automatically suggest adding missing import statements in JavaScript and TypeScript files.

With VS Code 1.100, the editor now provides links to additional information that explains why an extension identified as malicious was flagged. These “Learn More” links connect users to GitHub issues or documentation with details about the security concerns, helping users better understand potential risks. In addition, extension signature verification is now required on all platforms: Windows, macOS, and Linux. Previously this verification was mandatory only on Windows and macOS; with this release, Linux also enforces it, ensuring that all extensions are properly validated before installation.

VS Code also features two new modes for floating windows. Floating windows in VS Code allow developers to move editors and certain views out of the main window into a smaller window for lightweight multi-window setups. The two new modes are Compact, in which certain UI elements are hidden to make more room for the actual content, and Always-on-top, in which the window stays on top of all other windows until the developer leaves this mode.

For source control, VS Code 1.100 adds quick diff editor decorations for staged changes. Developers now can view staged changes directly from the editor, without needing to open the Source Control view. For debugging, VS Code 1.100 features a context menu in the disassembly view.

VS Code 1.100 follows VS Code 1.99, which was released April 3 with improvements for Copilot Chat and Copilot agent mode, along with the introduction of Next Edit Suggestions. VS Code 1.99 was followed by three point releases that addressed various bugs and security issues.


GenAI isn’t taking software engineering jobs, but it is reshaping leadership roles 9 May 2025, 12:39 pm

Generative artificial intelligence (genAI) is reshaping the managerial responsibilities of software engineering leaders, according to Haritha Khandabattu, a senior director analyst at Gartner. Khandabattu said that while the technology is highly advanced, its primary function is to enhance team effectiveness and efficiency.

A Gartner survey of 400 software engineering team leaders found that up to half of their software development teams use genAI tools to augment their work, acting as a force multiplier rather than a replacement for human developers. “While genAI tools are highly advanced, their purpose is not to replace engineers,” Khandabattu said in a report released yesterday by Gartner.

Help for developers at all levels

GenAI can help experienced engineers adapt across different platforms and projects. Less experienced team members can also benefit from automating routine tasks, so they can concentrate on more complex challenges, Khandabattu said.

As software engineering leaders pilot and scale genAI tools, they should focus on demonstrating their tangible business value. “It’s not just about proving that AI can work. It’s about showing how it transforms teams to drive real business outcomes. By linking technology outcomes to business goals, they will build a compelling case for continued investment in their teams,” Khandabattu said.

Khandabattu reiterated that genAI is not a cost-cutting measure or a means of staff replacement but rather a potent ally in enhancing the efficiency of engineering teams.

Recruitment strategies evolve with genAI

The integration of genAI is also altering how software engineering leaders approach talent acquisition and management. Khandabattu said that traditionally time-consuming tasks such as summarizing interview feedback, writing job descriptions, and onboarding new hires can be streamlined through genAI. In fact, a Q4 2024 Gartner survey of 487 CIOs and IT leaders indicated that more than a third of respondents use AI to generate job descriptions.

Khandabattu said that genAI can expedite the hiring process by helping identify top candidates. For instance, leaders can use genAI to conduct job analyses by inputting prompts like, “What are the top skills for a platform engineering manager?” While this provides a valuable starting point, Khandabattu cautioned that human review remains essential. Furthermore, AI-driven interview intelligence platforms can transcribe and summarize interviews, leading to significant time savings.

Onboarding processes can also be made more seamless with genAI. “AI-powered chatbots can assist new employees with FAQs and guide them through paperwork and training. This will enable them to get up to speed quickly and work on key projects sooner,” Khandabattu said. This accelerated onboarding allows new hires to become productive on key projects more quickly.

Generative AI impacts the responsibilities of a software engineering leader. Source: Gartner

Actions software engineering leaders should take now

To help their teams succeed in the era of genAI, Khandabattu outlined three areas software engineering leaders should prioritize:

  1. Strategic skill management and development: Khandabattu said that skill management and development are central to a leader’s responsibilities. “Software engineering leaders must upskill their teams in large language models (LLMs), prompt engineering, and more, so they can tackle new challenges.” They also must collaborate with human resources departments to develop tailored AI training programs.
  2. Cultivating a culture of learning: Fostering agile learning programs can lead to improved business outcomes, more adaptable employees, and a proactive plan to address evolving skill requirements, according to Khandabattu. “The idea is to develop each employee’s genAI skills ahead of demand.”
  3. Establishing new ethics policies: To implement clear AI ethics policies, software engineering leaders should define responsibilities across DevOps, DataOps, and ModelOps cycles. Khandabattu stressed the critical role of legal and security teams in these efforts. “There is a clear need to coordinate these cross-functional activities, ensuring accountability and smooth handoffs. Legal and security teams must also be involved in these efforts.”


Cloud repatriation hits its stride 9 May 2025, 5:00 am

For the past decade, the cloud was the ultimate destination for forward-thinking IT leaders. Hyperscale providers sold a compelling promise: agility, scalability, and always-on innovation. CIOs pushed cloud-first mandates, and for a time, moving workloads to AWS, Azure, or Google Cloud seemed like the most logical step for companies of every size.

But 2025 feels different. Repatriation—once a quiet undercurrent—has surged into the mainstream. The driving force behind this movement? Artificial intelligence. AI isn’t just another workload type. Its need for specialized compute, from GPUs to high-bandwidth networking and massive storage, has fundamentally challenged the economics that justified mass cloud migrations in the first place.

Don’t take my word for it; listen to cloud giant AWS. The New Stack reports:

In a recent U.K. Competition and Markets Authority (CMA) hearing, AWS challenged the notion that “once customers move to the cloud, they never return to on-premises.” They pointed to specific examples of customers moving workloads back to on-premises systems, acknowledging customers’ flexibility in their infrastructure choices. Despite hyperscalers’ earnings growing fast, there is a rising concern about the sustainability of that growth.

AI: A new budget superpower

Many enterprises are now confronting a stark reality. AI is expensive, not just in terms of infrastructure and operations, but in the way it consumes entire IT budgets. Training foundational models or running continuous inference pipelines takes resources an order of magnitude greater than the average SaaS or data analytics workload. As competition in AI heats up, executives are asking tough questions: Is every app in the cloud still worth its cost? Where can we redeploy dollars to speed up our AI road map?

We’re witnessing IT teams pore over their cloud bills with renewed vigor. Brownfield apps with predictable usage are primarily under the microscope. Does it really make sense to pay premium cloud prices when legacy or colocation facilities can handle steady workloads at a fraction of the cost? For many, the answer is increasingly no, and those cloud resources are starting to find their way back home.

The hyperscalers aren’t oblivious. AWS, Microsoft, and Google are seeing their most sophisticated enterprise clients not just slow cloud migrations but actively repatriate workloads. These are often the workloads with the steadiest, most predictable resource profiles—the kind that are easiest to budget for when owned outright but hard to justify at on-demand, public cloud prices.

Simultaneously, a new breed of AI infrastructure providers is rising, offering bare metal, GPU-as-a-service, or colocation solutions purpose-built for machine learning. These platforms attract business by being more transparent, customizable, and affordable for enterprises tired of chasing discounts and deciphering complexity in hyperscaler pricing. The hyperscalers are responding with hybrid and multicloud offerings—even working to allow easier migration, better reporting, and more granular consumption-based pricing.

Still, there’s an acknowledgment in the boardrooms of Seattle and Silicon Valley: The easy growth is gone. Enterprises now want flexibility, especially when core business transformation depends on AI investment. Cloud providers must be more than arms-length landlords—they must become close partners, prepared to meet client workloads both on-prem and in the cloud, depending on what makes the most sense that quarter.

Navigating the hybrid cloud era

Repatriation doesn’t signal the end of cloud, but rather the evolution toward a more pragmatic, hybrid model. Cloud will remain vital for elastic demand, rapid prototyping, and global scale—no on-premises solution can beat cloud when workloads spike unpredictably. But for the many applications whose requirements never change and whose performance is stable year-round, the lure of lower-cost, self-operated infrastructure is too compelling in a world where AI now absorbs so much of the IT spend.

In this new landscape, IT leaders must master workload placement, matching each application not only to its technical requirements but also to business and financial imperatives. Sophisticated cost management tools are on the rise, and the next wave of cloud architects will be those as fluent in finance as they are in Kubernetes or Terraform.

Expect the next few years to feature:

  • Continued pressure on hyperscalers: Demands for transparency, flexible pricing, and hybrid support aren’t going away. Providers that don’t respond risk losing their best (and most profitable) enterprise customers.
  • The normalization of workload mobility: Moving between cloud and on-prem will become routine, not exceptional.
  • Budget reallocation at scale: Enterprises will double down on cost optimization not just to save money but to free up the resources AI demands.

AI isn’t just another line item—it’s the force reshaping cloud economics and triggering the widespread reconsideration of where and how enterprises run their most important workloads. In order to stay relevant, hyperscalers must evolve, offering realistic pricing and embracing hybrid. For CIOs, the new north star is optimization—of costs as well as of business value. Repatriation, once a tactical move, is now a strategic lever in a world where AI’s potential requires every available dollar and ounce of efficiency.


7 application security startups at RSAC 2025 9 May 2025, 5:00 am

The RSAC Early Stage Expo, the innovation hub of RSAC 2025, was created to spotlight emerging players in the information security space. Among the dozens of startups packed into the second-floor booth area, these VC-backed newcomers in API and application security stood out.

Akto.io

Akto offers an API security platform that addresses key challenges across visibility, testing, and risk management. It begins with API discovery, shedding light on shadow and zombie APIs that often go unnoticed. Akto then automates API security testing (a process still manual in many organizations), streamlining vulnerability detection while also offering runtime threat protection. Finally, it provides API security posture management by identifying and prioritizing the most high-risk APIs within an application, helping teams focus their remediation efforts effectively. 


Akto enhances API monitoring by identifying vulnerabilities, assessing risk levels, and detecting potential exposure. It runs over a thousand test cases to uncover critical issues such as broken authentication or authorization flaws, and integrates seamlessly into CI/CD pipelines, enabling automated API security at every stage of development. The platform also leverages agentic AI to enhance API discovery, security testing, and posture management, reducing false positives, improving the depth and accuracy of results, and delivering more reliable and efficient security coverage throughout the development life cycle.

AppSentinels

AppSentinels is an API security platform that analyzes application workflows and activity across the full application life cycle. By understanding app workflows, it can test for vulnerabilities and defend against complex business logic attacks in production. The platform uses advanced AI models, including graph logic, unsupervised clustering, and state space models, to map application functionality and internal processes, enabling it to detect and block sophisticated threats.

AppSentinels CEO and co-founder Puneet Tutliani said the company protects 100 billion API calls each month and aims to scale to half a trillion API calls within the next four to six months. Product developments in the last year include enhanced and deeper business logic understanding of workflows (by leveraging test cases), continuous 24/7 penetration testing without a human in the loop (a longstanding challenge in API and application security), and runtime protection that works both out-of-band and inline. Tutliani said that business and monetary fraud is currently the top concern for their clients, and AppSentinels plans to dedicate more resources in this area.

Aurva

The Aurva security platform secures sensitive data at run time, focusing on how data is used, who accesses it, and where it flows, both inside and outside of the organization. It maps data activity in real time, combining model-layer AI security, database activity monitoring, runtime data security posture management, and data flow monitoring to provide visibility into access patterns and data movement.

For non-Windows systems, Aurva uses eBPF to monitor data packets without being in-line, enabling high-speed, low-latency performance. For Windows environments, it uses custom lightweight agents powered by Agentix to deliver similar functionality. Processing over a billion queries daily for some customers, Aurva offers comprehensive insight into data access and flows across complex environments while ensuring minimal impact on system performance.

Escape

Escape is a dynamic application security testing (DAST) platform purpose-built to detect and prioritize complex business logic vulnerabilities, issues that traditional tools often miss. Rather than focusing solely on surface-level flaws like missing headers, Escape helps organizations identify, triage, and remediate deeper vulnerabilities such as broken object level authorization, insecure direct object references, and access control issues.

Escape identifies API endpoints through multiple sources: analyzing exposed web code, crawling domains using its custom spider, and integrating directly with repositories on GitHub and GitLab to discover APIs from source code. Once APIs are discovered, Escape generates a wide array of attack scenarios, ranging from classic vulnerabilities like SQL injection or man-in-the-middle attacks to advanced business logic exploits. The platform then prioritizes findings based on their business impact, using a severity matrix that factors in traditional cybersecurity scores, exploitability, and environment-specific risk.

To accelerate remediation, Escape provides code snippets tailored to each development framework, enabling faster fixes by developers and aligning with modern DevSecOps workflows, reducing friction between security and engineering teams.

Raven

Raven brings a runtime-first approach to application security, enabling organizations to analyze their code in production and de-prioritize up to 97% of open-source vulnerabilities that pose no real risk. Raven analyzes code at the functional level in real time, identifying only those vulnerabilities that are truly exploitable in the application’s runtime context. At the core of the Raven platform are proprietary eBPF sensors that observe the entire stack, from the operating system to the application layer, without requiring code injection or instrumentation. These sensors trace which libraries and functions are actually in use, reducing noise and revealing the true risk profile.

Raven also employs an agentic AI system, supported by expert engineers, to pre-analyze vulnerable functions across open-source libraries. This enables library-level risk assessment when cross-referenced with a customer’s live application behavior. Transitive dependencies, often hidden but equally dangerous, are also tracked and analyzed within Raven’s runtime dependency graph, helping identify deep-rooted vulnerabilities. Raven also provides suggested remediations after finding these vulnerabilities, and includes runtime detection and response capabilities. It can detect runtime anomalies early, the company said, allowing security teams to respond faster to emerging threats.

Seal Security

Seal Security streamlines open-source vulnerability patching by making the latest security fixes backwards compatible with older library versions. These standalone patches are integrated into the build process, allowing developers to automatically address vulnerabilities without chasing updates and reducing coordination time between development and security teams. CEO and co-founder Itamar Sher said that the company has focused on two additional areas beyond application security in the past year: securing open-source operating systems and securing container images. All three are now combined into the Seal Security package.

If you have a security patch for your OS that Seal detects, you just have to press a single button to deploy the latest patch applicable to your specific environment, Sher said. Seal makes sure that all of the open-source components that are part of your build chain are secure, and come from a secure source. Seal commits to customers that they can take a container base image and make a vulnerability-free version of it within three days. In addition, Seal Security has expanded its support of programming languages in the past year from five languages to eight including Java, C# (.NET), Python, JavaScript, C, C++, PHP, and Ruby.

Seezo

Seezo addresses application security before developers even start coding, with an AI-powered security design review (SDR) platform. Seezo automates the traditionally manual and resource-heavy process of conducting security design reviews for every new feature the engineering team builds, before they build it, helping to shift security even further left in the software development life cycle.

Instead of relying on scarce App Sec personnel (the industry average is just two security professionals for every 100 developers), Seezo uses AI to analyze design documents, Jira tickets, product requirement documents, and architectural diagrams. From this context, it generates tailored security requirements for developers before a single line of code is written. This early intervention dramatically reduces the number of vulnerabilities introduced later in the development pipeline, according to the company. Where manual security reviews currently cover only 10% to 15% of new features, Seezo aims to scale this coverage to 100%, without requiring teams to grow exponentially.

Seezo is LLM-agnostic, prioritizing performance to ensure its solution remains flexible and efficient across SaaS and on-premise deployments. By automating the generation of contextual security guidance at the design stage, Seezo helps developers to build securely from day one, bridging the gap between product design and secure implementation.


Python popularity climbs to highest ever – Tiobe 8 May 2025, 8:11 pm

Python continues to soar in the Tiobe index of programming language popularity, rising to a 25.35% share in May 2025. It’s the highest Tiobe rating for any language since 2001, when Java topped the chart.

Python’s popularity increased roughly 2.2 percentage points in the past month; the language had a rating of 23.08% in April. Python also racked up the largest lead a language has ever had, running 15 percentage points ahead of the second most popular language, C++, which has a 9.94% rating. “The only reason other languages still have a reason for existing is because of Python’s low performance, and the fact that it is interpreted and thus prone to unexpected run-time errors,” said Tiobe CEO Paul Jansen, who had noted Python’s “serious drawbacks” previously. “This means that safety-critical and/or real-time systems still have to rely on other languages, but in most other domains Python is slowly but surely finding its way to the top.”

Python’s top ranking today carries more weight than Java’s almost 24 years ago, Jansen noted. In June 2001, Java’s rating was 26.49% and in October 2001 it was 25.68%. But Tiobe only tracked 20 different programming languages back then, versus 282 languages today. “Hence, it was easier to get such a high score in 2001,” Jansen said.

Software quality services provider Tiobe rates programming language popularity based on the number of skilled engineers worldwide, courses, and third-party vendors, using popular websites such as Google, Wikipedia, Bing, Amazon, and more than 20 others to calculate the ratings.

The Tiobe index top 10 for May 2025:

  1. Python, with a rating of 25.35%
  2. C++, 9.94%
  3. C, 9.71%
  4. Java, 9.31%
  5. C#, 4.22%
  6. JavaScript, 3.68%
  7. Go, 2.7%
  8. Visual Basic, 2.62%
  9. Delphi/Object Pascal, 2.29%
  10. SQL, 1.9%

The rival Pypl Popularity of Programming Language Index ranks language popularity by analyzing how often languages are searched on Google.

The Pypl index top 10 for May 2025:

  1. Python, with a share of 30.41%
  2. Java, 15.12%
  3. JavaScript, 7.93%
  4. C/C++, 6.98%
  5. C#, 6.09%
  6. R, 4.59%
  7. PHP, 3.71%
  8. Rust, 3.09%
  9. TypeScript, 2.8%
  10. Objective-C, 2.76%


Sizing up the AI code generators 8 May 2025, 5:00 am

Every developer has now pasted code into ChatGPT or watched GitHub Copilot autocomplete a function. If that’s your only exposure, it’s easy to conclude that coding with large language models (LLMs) isn’t “there yet.” In practice, model quality and specialization are moving so fast that the experience you had even eight weeks ago is already out of date. OpenAI, Anthropic, and Google have each shipped major upgrades this spring, and OpenAI quietly added an “o-series” of models aimed at reasoning.

Below is a field report from daily production use across five leading models. Treat it as a snapshot, not gospel—by the time you read this, a point release may have shuffled the rankings again.

OpenAI GPT-4.1: UI whisperer, not my main coder

OpenAI’s GPT-4.1 replaces the now-retired GPT-4.5 preview, offering a cheaper, lower-latency 128k-token context and better image-to-spec generation. It’s still solid at greenfield scaffolding and turning screenshots into code, but when the task is threading a fix through a mature code base, it loses track of long dependency chains and unit-test edge cases.

When to call it: Design-system mock-ups, API documentation drafts, converting UI comps into component stubs.
When to skip it: After your initial scaffold.

Anthropic Claude 3.7 Sonnet: The dependable workhorse

Anthropic’s latest Sonnet model is still the model I reach for first. It strikes the best cost-to-latency balance, keeps global project context in its 128k window, and rarely hallucinates library names. On tough bugs, it sometimes “cheats” by adding what it calls “special case handling” to the code under test (watch for if (id==='TEST_CASE_1 data')-style patches). Sonnet also has a habit of disabling ESLint or TypeScript checks “for speed,” so keep your linter on.

Sweet spot: Iterative feature work, refactors that touch between five and 50 files, reasoning over build pipelines.
Weak spot: Anything visual, CSS fine-tuning, unit test mocks.
Tip: grep your code for the string “special case handling”.

Google Gemini 2.5 Pro-Exp: The UI specialist with identity issues

Google’s Gemini 2.5 release ships a one-million-token context (two million promised) and is currently free to use in many places (I’ve yet to be charged for API calls). It shines at UI work and is the fastest model I’ve used for code generation. The catch: If your repo uses an API that changed post-training, Gemini may argue with your “outdated” reality—sometimes putting your reality in scare quotes. It also once claimed that something in the log wasn’t possible because it occurs in the “future.”

Use it for: Dashboards, design-system polish, accessibility passes, quick proof-of-concept UIs.
Watch out for: Confident but wrong API calls and hallucinated libraries. Double-check any library versions it cites.

OpenAI o3: Premium problem solver, priced accordingly

OpenAI’s o3 (the naming still confuses people who expect “GPT”) is a research-grade reasoning engine. It chains tool calls and writes analyses, and it will pore over a 300-test Jest suite without complaint. It is also gated (I had to show my passport for approval), slow, and costly. Unless you’re on a FAANG-scale budget or you’re unable to resolve a bug yourself, o3 is a luxury, not a daily driver.

OpenAI o4-mini: The debugger’s scalpel

The surprise hit of April is o4-mini: a compressed o-series variant optimized for tight reasoning loops. In practice, it’s 3-4× faster than o3, still expensive via the OpenAI API, but throttled “for free” in several IDEs. Where Claude stalls on mocked dependencies, o4-mini will reorganize the test harness and nail the bug. The output is terse, which is surprising for an OpenAI model (https://openai.com/index/sycophancy-in-gpt-4o/).

Great for: Gnarly generics, dependency injection edge cases, mocking strategies that stump other models.
Less ideal for: Bulk code generation or long explanations. You’ll get concise patches, not essays.

Multi-model workflow: A practical playbook

  1. Explore UI ideas in ChatGPT using GPT-4.1. Drop your slide deck and ask it to generate mockups. Remind your code generator that DALL-E does some weird things with words.
  2. Create your initial specification with Claude in thinking mode. Ask another LLM to critique it. Ask for an implementation plan in steps. Sometimes I ask o4-mini if the spec is enough for an LLM to follow in a clean context.
  3. Scaffold with Gemini 2.5. Drop sketches, gather the React or Flutter shell, and the overall structure.
  4. Flesh out logic with Claude 3.7. Import the shell, have Sonnet fill in the controller logic and tests.
  5. Debug or finish the parts Claude missed with o4-mini. Let it redesign mocks or type stubs until tests pass.

This “relay race” keeps each model in its lane, minimizes token burn, and lets you exploit free-tier windows without hitting rate caps.

Final skepticism (read before you ship)

LLM coding still demands human review. All five models occasionally:

  • Stub out failing paths instead of fixing root causes.
  • Over-eagerly install transitive dependencies (check your package.json).
  • Disable type checks or ESLint guards “temporarily.”

Automated contract tests, incremental linting, and commit-time diff review remain mandatory. Treat models as interns with photographic memory. They’re excellent pattern matchers, terrible at accountability. (Author’s note: Ironically, o3 added this part when I asked it to proofread but I liked it so much I kept it.)

Bottom line

If you tried GitHub Copilot in 2024 and wrote off AI coding, update your tool kit. Claude 3.7 Sonnet delivers day-to-day reliability, Gemini 2.5 nails front-end ergonomics, and o4-mini is the best pure debugger available—provided you can afford the tokens or you have a lot of patience. Mix and match. You can always step in when a real brain is required.


Running PyTorch on an Arm Copilot+ PC 8 May 2025, 5:00 am

When Microsoft launched its Copilot+ PC range almost a year ago, it announced that it would deliver the Copilot Runtime, a set of tools to help developers take advantage of the devices’ built-in AI accelerators, in the shape of neural processing units (NPUs). Instead of massive cloud-hosted models, this new class of hardware would encourage the use of smaller, local AI, keeping users’ personal information where it belonged.

NPUs are key to this promise, delivering at least 40 trillion operations per second. They’re designed to support modern machine learning models, providing dedicated compute for the neural networks that underpin much of today’s AI. An NPU is a massively parallel device with a similar architecture to a GPU, but it offers a set of instructions that are purely focused on the requirements of AI and support the necessary feedback loops in a deep learning neural network.

The slow arrival of the Copilot Runtime

It’s taken nearly a year for the first tools to arrive, much of them still in preview. To be fair, that’s not surprising, considering the planned breadth of the Copilot Runtime and the need to deliver a set of reliable tools and services. Still, it’s taken longer than Microsoft initially promised.

Some of the holdup was due to problems associated with providing runtimes for the Qualcomm Hexagon NPU, though most of the delay stemmed from the complexity of delivering the right level of abstraction for developers when introducing a new set of technologies.

One of the last pieces of the Copilot Runtime to arrive rolled out a few weeks ago, an Arm-native version of the PyTorch machine learning framework, as part of the PyTorch 2.7 release. With much of the publicity around AI during the past couple of years focusing on transformer-based large language models, there’s still a lot of practical work that can be delivered using smaller, more targeted neural networks for everything from image processing to small language models.

Why PyTorch?

PyTorch provides a set of abstractions and features that can help build more complex models, with support for tensors and neural networks. Tensors make it easy to work with large multidimensional arrays, a key tool for neural network–based machine learning. At the same time, PyTorch also provides a basic neural network model that can both define and train your machine learning models, with the ability to manage forward passes through the network.

It’s a useful tool, as it’s used by open source AI model services such as the Hugging Face community. With PyTorch you can quickly write code that lets you experiment with models, allowing you to quickly see how changes in parameters, tuning, or training data affect outputs.

You can start by using its core primitives to define the layers in a neural net and see how data flows through the network. This allows you to start building a machine learning model, adding a training loop using back propagation to refine model parameters, comparing output predictions against a test data set to track how the model is learning. Meanwhile you can use tensors to process data sets for use in the neural network, for example, processing the data used to make up an image. Once trained, models can be saved and loaded and used to test inferencing.
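
To make that concrete, here is a minimal sketch of the workflow just described: a few layers, a training loop driven by backpropagation, and random tensors standing in for a real data set. The network, data, and hyperparameters are illustrative placeholders, not code from any official sample.


# A tiny model, a loss function, and an optimizer
import torch
from torch import nn, optim

model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# Random tensors stand in for a real training set
inputs = torch.rand(64, 8)
targets = torch.rand(64, 1)

# Training loop: forward pass, loss, backpropagation, parameter update
for epoch in range(100):
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, targets)
    loss.backward()
    optimizer.step()

# Save the trained parameters for later inference
torch.save(model.state_dict(), "model.pt")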

Bringing PyTorch to Arm

With Copilot+ PCs at the heart of Microsoft’s endpoint AI development strategy, they need to be as much a developer platform as an end-user device. As a result, Microsoft has been delivering more and more Arm-based developer tools. The latest is a set of Arm-native builds of PyTorch and its LibTorch libraries. Sadly, these builds don’t yet support Qualcomm’s Hexagon NPUs, but the Snapdragon X processors in Arm-based Copilot+ PCs are more than capable enough for even relatively complex generative AI models.

Tools are already in place for consuming local AI models: the APIs in the Windows App SDK, ONNX model runtimes for the Hexagon NPU, and support in DirectML. Adding an Arm version of PyTorch fills a big gap in the Arm Windows AI development story. Now you can go from model to training to tuning to inferencing to applications without leaving your PC or your copy of Visual Studio (or Visual Studio Code). All the tools you need to build, test, and package endpoint AI applications are now Arm-native, so there’s no need to worry about the overheads that come with Windows’ Prism x64 emulation.

So, how do you get started with PyTorch on an Arm-based PC?

Installing PyTorch on Windows on Arm

I tried it out using a seventh-generation Surface Laptop with a 12-core Qualcomm Snapdragon X Elite processor and 16GB of RAM. (Although it worked, it showed an interesting gap in Microsoft’s testing: The chipset I used was not in the headers for the code used to compile PyTorch.) Like most development platforms, it’s a matter of getting your toolchain in place before you start coding, so be sure to follow the directions in the announcement blog post.

As PyTorch depends on compiling many of its modules as part of installation, you need to have installed the Visual Studio Build Tools, with support for C++, before installing Python. If you’re using Visual Studio, make sure you’ve enabled Desktop Development with C++ and installed the latest Arm64 build tools. Next, install Rust, using the standard Rust installer. This will automatically detect the Arm processor and ensure you have the right version.

With all the prerequisites in place, you can now install the Arm64 release of Python from Python.org before using the pip installer to install the latest version of PyTorch. This will download the Arm versions of the binaries and compile and install any necessary components. It can take some time, so be prepared to wait. If you prefer to use the C++ PyTorch tool, you can download an Arm-ready version of LibTorch.

Getting the right version of LibTorch can be confusing, and I found it easiest to use the link in the Microsoft blog post to download the nightly build, as this goes straight to an Arm version. The library comes as a ZIP archive, so you will need to install it alongside your C++ PyTorch projects. I decided to stick with Python, so I didn’t install LibTorch on my development Arm laptop.
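
With everything in place, a short script is enough to confirm that the Arm-native build of PyTorch is working. This is just a smoke test; the version string and architecture output will vary by machine:


import platform

import torch

print(torch.__version__)      # e.g. 2.7.x
print(platform.machine())     # should report an Arm architecture such as ARM64

# A basic tensor operation to exercise the CPU backend
x = torch.rand(1024, 1024)
y = torch.rand(1024, 1024)
print((x @ y).shape)          # torch.Size([1024, 1024])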

Running AI models in PyTorch on Windows

You’re now ready to start experimenting with PyTorch to build, train, or test models. Microsoft provided some sample code as part of its announcement, but I found its formatting didn’t copy to Visual Studio Code, so I downloaded the files from a linked GitHub repository. This turned out to be the right choice, as the blog post didn’t include the essential requirements.txt file needed to install necessary components.

The sample code downloads a pretrained Stable Diffusion model from Hugging Face and then sets up an inferencing pipeline around PyTorch, implementing a simple web server and a UI that takes in a prompt, lets you tune the number of passes used, and sets the seed used. Generating an image takes 30 seconds or so on a 12-core Snapdragon X Elite, with the only real constraint being available memory. You can get details of operations (and launch the application) from the Visual Studio Code terminal.

It’s possible that performance could be improved if Microsoft added the Surface Laptop’s processor to the header files used to compile the PyTorch Python libraries. An error message at launch shows that the SOC specification is unknown, but the application still runs—and Task Manager says that it is a 64-bit Arm implementation.

Running a PyTorch inference is relatively simple, with only 35 lines of code needed to download the model, load it into PyTorch, and then run it. Having a framework like this to test new models is useful, especially one that’s this easy to get running.
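
For a sense of what such an inference script involves, here is a stripped-down sketch built on Hugging Face’s diffusers library. It is not Microsoft’s sample: the model ID, prompt, and settings are illustrative, and the real sample wraps a web server and UI around the pipeline.


# Requires the diffusers and transformers packages (pip install diffusers transformers)
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained Stable Diffusion model from Hugging Face
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float32,                      # CPU-friendly precision
)
pipe = pipe.to("cpu")  # runs on the Snapdragon CPU cores; no NPU backend yet

# Run the inference pipeline and save the generated image
image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=20,   # fewer passes is faster but rougher
).images[0]
image.save("lighthouse.png")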

Although it would be nice to have NPU support, that will require more work in the upstream PyTorch project, as it has been concentrating on using CUDA on Nvidia GPUs. As a result, there’s been relatively little focus on AI accelerators at this point. However, with the increasing popularity of silicon like Qualcomm’s Hexagon and the NPUs in the latest generation of Intel and AMD chip sets, it would be good to see Microsoft add full support for all the capabilities of its and its partners’ Copilot+ PC hardware.

It’s a good sign when we want more, and having an Arm version of PyTorch is an important part of the necessary endpoint AI development toolchain to build useful AI applications. By working with the tools used by services like Hugging Face, we’re able to try any of a large number of open source AI models, testing and tuning them on our data and on our PCs, delivering something that’s much more than another chatbot.


SAS supercharges Viya platform with AI agents, copilots, and synthetic data tools 8 May 2025, 3:15 am

At its Innovate conference, SAS on Wednesday announced a series of offerings across its portfolio that focus on AI in all its forms. They ranged from a series of enhancements to its Viya platform to a set of new custom AI models aimed at specific sectors, plus some governance resources to help organizations reduce risk from the technology.

SAS Viya and AI agents

The company announced new and enhanced components for the SAS Viya platform that are aimed at both developers and end users.

SAS Data Maker, a synthetic data generator which, SAS said, helps organizations tackle data privacy and scarcity challenges, has been enhanced with technology from the company’s recent acquisition of synthetic data pioneer Hazy. It looks at a source dataset that will be used as a basis for the synthetic data, automatically figures out the structure, and produces an entity relationship map that can then be tweaked and used to train the generative model to create high-quality synthetic data.

With the current release, it has been upgraded to produce multi-table data and time series data. Data Maker has been in private preview but is soon moving to public preview, and is expected to be generally available in the third quarter of this year.

SAS Viya Intelligent Decisioning, available now, helps users build and deploy intelligent AI agents via a low-code/no-code tool, with what the company describes as “The just right AI autonomy to human involvement ratio to strike the optimal oversight balance for task complexity, risk, and business goals.” For example, an agent vetting mortgage applications can flag specific denials for human review; the human can then query the agent to explore its thinking and make the final decision.

SAS Viya Copilot, now in private preview, is built on Microsoft Azure AI Services. It is an AI-driven conversational assistant embedded directly into the SAS Viya platform, to give developers, data scientists, and business users alike a personal assistant that accelerates analytical, business, and industry tasks. The initial Copilot offering in Model Studio includes AI-powered model development and code assistance for SAS users. It will be generally available in the third quarter.

Initially released in 2024, SAS Viya Workbench receives support for coding in R and for SAS Enterprise Guide as an optional integrated development environment (IDE), and is now available on Microsoft Azure Marketplace as well as AWS Marketplace.

“While these updates may not be groundbreaking, they integrate features with built-in governance and ready-to-use models, crucial for enterprise use,” said Robert Kramer, VP & principal analyst, Enterprise Data, ERP & SCM, Moor Insights & Strategy. “Customers may be able to benefit from faster onboarding, easier collaboration, and more secure AI development, especially in regulated industries where auditability and model transparency matter.”

Prebuilt models

AI was also the basis for several other new SAS offerings.

In addition to Intelligent Decisioning, the company introduced six custom AI models to address specific processes in various industries.

“Our customer space is actually divided into two segments,” said VP of Applied AI and Modeling Udo Sglavo, during a media briefing. “One, they do have data science teams. The data science teams most of the time do an awesome job in their domain. The other camp, which is actually a little bit bigger, is they don’t have any data science resources, so they don’t know how to get started. We believe that with SAS models, we are addressing the needs of both segments of our market.”

He pointed out that, thanks to the models, companies with data science teams can shift their focus to strategic questions for the company, and leave the other questions to the model. For companies that don’t have a data science team, he said, “It’s a quick way to get started and see the impact of AI on your business models right away. So once again, value from the first day.”

Two of the models, AI-driven Entity Resolution and Document Analysis, are suitable for many industries. The Medication Adherence Risk model targets healthcare, Strategic Supply Chain Optimization serves manufacturing, and the final two, Payment Integrity for Food Assistance and Tax Compliance for Sales Tax, are for the public sector.

Later this year, four more models will join the group: Fraud Decisioning for Payments and Card Models for banking, Payment Integrity for HealthCare for healthcare, Worker Safety Monitoring for manufacturing, and Tax Compliance for Individual Income Tax for the public sector.

These models, said Sglavo, are lightweight and easy to deploy. “They basically live in a container, which you can plug into any ecosystem, just like SAS Viya. And you can get them up and running right away.” And, he added, “they are built around real-world industry cases, so we are turning practical use cases into software which you can productionize right away and get value right away.”

Kramer agreed. “The availability of pre-built AI models for applications such as fraud detection, supply chain planning, and health risk assessment should help organizations accelerate their AI adoption by providing ready-to-use solutions,” he said.

Governance

As AI becomes more pervasive in the enterprise, it also presents risk.

“It’s important that we’ve got a way to assess intended use and expected outcomes before deployment, and that we’ve got the ability to monitor for ongoing compliance,” noted Reggie Townsend, VP of the data ethics practice at SAS. “Now, this is a matter of oversight, for sure. This is a matter of operations, and this is a matter of organizational culture. And all of these things combined are what represent this new world of AI governance, where there’s a duality that exists between these conveniently accessible productivity boosters that the team has been talking about this morning, but we’re intersecting with inaccuracies and inconsistency and potential intellectual property leakage.”

To that end, SAS has introduced new governance resources to help organizations assess their current AI governance maturity in four essential areas: oversight, compliance, operations, and culture. Known as the AI Governance Map, the tool is the latest addition to the company’s suite of governance products. And more are on the way; SAS has announced an upcoming product, designed for executives, that it described as “a unified holistic AI governance solution” able to aggregate, orchestrate, and monitor AI systems, models, and agents.

Abhishek Punjani, research analyst – AI at Info-Tech Research Group, approved of the direction SAS has taken. “In the race to innovate with AI, many organizations made a fundamental misstep early on in their AI journey by putting innovation and speed above control, sometimes at the cost of long-term resilience,” he noted. “However, the tide is beginning to swing in a more responsible and balanced direction. With its latest and agentic AI innovations, SAS is at the forefront of the industry’s movement toward a more balanced and responsible path forward. Through a combination of ethical calibrations, SAS intends to create enterprise value through AI systems that are impactful and ethically sound, allowing for tailored levels of human oversight and intervention.”

Punjani also liked the approach taken with the rest of the SAS announcements.

“Building on this governance-first philosophy, SAS has also expanded its Viya platform with a suite of tools aimed at practical AI enablement,” he said. “SAS Data Maker addresses one of the most prominent issues in AI today, data scarcity and privacy, by generating secure synthetic data for safe model training. SAS Viya Intelligent Decisioning enables organizations to build AI agents with customized human involvement, allowing users to embed policy, logic, and rules into their AI agents for adaptive actions.”

“Together, these solutions mark a shift toward more grounded, enterprise-ready AI,” he said. “Rather than chasing scale alone, they reflect a growing focus on control and accountability, qualities that are becoming essential as AI becomes central to important business operations. As more organizations look for ways to move beyond experimentation, approaches like SAS’, which build governance and flexibility into the product itself, are reshaping what mainstream AI adoption looks like. It’s a reminder that the next phase of AI adoption won’t be driven by scale alone, but by how well these systems can integrate into business processes with clarity.”


Node.js 24 drops MSVC support 7 May 2025, 11:10 pm

Node.js 24 has been released. The latest version of the open-source, cross-platform JavaScript runtime upgrades the Google V8 JavaScript engine to version 13.6 and the NPM package manager to version 11. Node.js 24 also drops support for MSVC, Microsoft’s C/C++ compiler, and ClangCL is now required to compile Node.js on Windows.

Introduced on May 6 as the “Current” release of Node.js, Node.js 24 enters long-term support status in October. It can be downloaded from nodejs.org.

The Node.js 24 release features V8 13.6, with new JavaScript features including Float16Array, explicit resource management, and WebAssembly Memory64 support, which adds support for 64-bit memory indexes to WebAssembly. NPM 11, meanwhile, offers performance and security improvements and better compatibility with modern JavaScript packages.

In other changes in Node.js 24:

  • AsyncLocalStorage now uses AsyncContextFrame by default, providing a more efficient implementation of asynchronous context tracking. This improves performance and makes the API more robust for advanced use cases.
  • The URLPattern API is now exposed on the global object, making it easier to use without explicit imports. The API provides a pattern-matching system for URLs, similar to how regular expressions work for strings.
  • The test runner module now waits automatically for subtests to finish, eliminating the need to manually await test promises. This makes writing tests more intuitive and reduces common errors related to unhandled promises.


JDK 25: The new features in Java 25 7 May 2025, 4:22 pm

Java Development Kit (JDK) 25, a planned long-term support release of standard Java due in September, now has six features officially proposed for it. The latest feature is a fifth preview of structured concurrency.

Separate from the official feature list, JDK 25 also brings performance improvements to the class String, by allowing the String::hashCode function to take advantage of a compiler optimization called constant folding. Developers who use strings as keys in a static unmodifiable Map should see significant performance boosts, according to a May 1 article on Oracle’s Inside Java web page.

JDK 25 comes on the heels of JDK 24, a six-month-support release that arrived March 18. As a long-term support (LTS) release, JDK 25 will get at least five years of Premier support from Oracle. JDK 25 is due to arrive as a production release on September 16, following rampdown phases in June and July and two release candidates planned for August. The most recent LTS release was JDK 21, which arrived in September 2023.

Early access builds of JDK 25 can be downloaded from jdk.java.net.

Structured concurrency was previewed previously in JDK 21 through JDK 24, after being incubated in JDK 19 and JDK 20. Structured concurrency treats groups of related tasks running in different threads as single units of work. This streamlines error handling and cancellation, improves reliability, and enhances observability, the proposal states. The primary goal is to promote a style of concurrent programming that can eliminate common risks arising from cancellation and shutdown, such as thread leaks and cancellation delays. A second goal is to improve the observability of concurrent code. JDK 25 introduces several API changes. In particular, a StructuredTaskScope is now opened via static factory methods rather than public constructors. Also, the zero-parameter open factory method covers the common case by creating a StructuredTaskScope that waits for all subtasks to succeed or any subtask to fail.

Flexible constructor bodies was previewed in JDK 22 as “statements before super(…)” as well as in JDK 23 and JDK 24. The feature is intended to be finalized in JDK 25. In flexible constructor bodies, the body of a constructor allows statements to appear before an explicit constructor invocation such as super(…) or this(…). These statements cannot reference the object under construction, but they can initialize its fields and perform other safe computations. This change lets many constructors be expressed more naturally and allows fields to be initialized before becoming visible to other code in the class, such as methods called from a superclass constructor, thereby improving safety. Goals of the feature include removing unnecessary restrictions on code in constructors; providing additional guarantees that the state of a new object is fully initialized before any code can use it; and reimagining the process of how constructors interact with each other to create a fully initialized object.

Module import declarations, which was previewed in JDK 23 and JDK 24, enhances the Java language with the ability to succinctly import all of the packages exported by a module. This simplifies the reuse of modular libraries but does not require the importing code to be in a module itself. Goals include simplifying the reuse of modular libraries by letting entire modules be imported at once; avoiding the noise of multiple type import-on-demand declarations when using diverse parts of the API exported by a module; allowing beginners to more easily use third-party libraries and fundamental Java classes without having to learn where they are located in a package hierarchy; and ensuring that module import declarations work smoothly alongside existing import declarations. Developers who use the module import feature should not be required to modularize their own code.

Compact source files and instance main methods evolves the Java language so beginners can write their first programs without needing to understand language features designed for large programs. Beginners can write streamlined declarations for single-class programs and seamlessly expand programs to use more advanced features as their skills grow. Likewise, experienced developers can write small programs succinctly without the need for constructs intended for programming in the large, the proposal states. This feature, due to be finalized in JDK 25, was previewed in JDK 21, JDK 22, JDK 23, and JDK 24, albeit under slightly different names. In JDK 24 it was called “simple source files and instance main methods.”

Stable values are objects that hold immutable data. Because stable values are treated as constants by the JVM, they enable the same performance optimizations that are enabled by declaring a field final. But compared to final fields, stable values offer greater flexibility regarding the timing of their initialization. A chief goal of this feature, which is in a preview stage, is improving the startup of Java applications by breaking up the monolithic initialization of application state. Other goals include enabling user code to safely enjoy constant-folding optimizations previously available only to JDK code; guaranteeing that stable values are initialized at most once, even in multi-threaded programs; and decoupling the creation of stable values from their initialization, without significant performance penalties.

Removal of the 32-bit x86 port involves removing both the source code and build support for this port, which was deprecated for removal in JDK 24. The cost of maintaining this port outweighs the benefits, the proposal states. Keeping parity with new features, such as the foreign function and memory API, is a major opportunity cost. Removing the 32-bit x86 port will allow OpenJDK developers to accelerate the development of new features and enhancements.

Other features that could find a home in JDK 25 include a key derivation function API, scoped values, and primitive types in patterns, instanceof, and switch, all of which were previewed in JDK 24. The vector API, which was incubated nine times from JDK 16 to JDK 24, also could appear in JDK 25.


The best new features and fixes in Python 3.14 7 May 2025, 12:56 pm

The first beta release of Python 3.14 is now available. This article presents a rundown of the most significant new features in the next version of Python and what they mean for Python developers.

Major new features in Python 3.14

These are the most significant new features in Python 3.14:

  • Template strings
  • Deferred evaluation of annotations
  • Better error messages
  • A safe external debugger interface to CPython
  • A C API for Python runtime configuration
  • “Tail-call-compiled” interpreter

Template strings

We’ve long used f-strings in Python to conveniently format variables in a string. Python 3.14 introduces an even more advanced feature in this vein, template strings as defined in PEP 750.

A template string, or t-string, lets you combine the template with a function that operates on the template’s structure, not just its output. You could write a template handler that allows all variables placed in the template, or only variables of a specific type, or only variables that match some output, to be manipulated at output time. You could also handle the variables and the interpolating text as separate, differently typed objects.

For instance, if you have the template t"My name is {user_name}, and I'm from {user_locale}", you could have the variables user_name and user_locale automatically cleaned of any HTML before display. You could also perform transformations on the My name is and and I'm from portions of the output separately, since the template keeps that static text apart from the substituted values, which are tagged with the special type Interpolation.

Template strings will make it far easier to write template engines, e.g., Jinja2, or to duplicate much of the functionality of those template engines directly in Python without the overhead of third-party libraries.
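
Here is a minimal sketch of what a t-string handler might look like, assuming the PEP 750 API as described in the proposal (a string.templatelib module whose Template objects yield plain strings and Interpolation objects when iterated). Treat the details as illustrative until you check them against the final 3.14 documentation:


from html import escape
from string.templatelib import Interpolation, Template

def render_html(template: Template) -> str:
    parts = []
    for item in template:
        if isinstance(item, Interpolation):
            # Interpolated values come from the user, so escape them
            parts.append(escape(str(item.value)))
        else:
            # Static text passes through untouched
            parts.append(item)
    return "".join(parts)

user_name = "<b>Serdar</b>"
print(render_html(t"My name is {user_name}"))
# My name is &lt;b&gt;Serdar&lt;/b&gt;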

Deferred evaluation of annotations

Type annotations in Python have historically been evaluated “eagerly,” meaning when they’re first encountered in code. This made it difficult to do things like forward references for a type—e.g., to have a class method that takes a parameter hinted with another type that isn’t defined at that point in the module:


class Thing:
    def frob(self, other:OtherThing):
        ...
class OtherThing:
    ...

Before Python 3.14, code like this would fail with a NameError when the annotation was evaluated, and it wouldn’t pass linting either. The workaround would be:


class Thing:
    def frob(self, other:"OtherThing"):
        ...
class OtherThing:
    ...

Or you could use from __future__ import annotations.

With Python 3.14, annotations for objects are now stored in “annotate functions,” which are available through an __annotate__ attribute. This attribute returns the annotation for a given object when it’s needed, so annotations can be evaluated lazily by linters or even evaluated at runtime.

The annotationlib module provides tools for inspecting these new annotations at runtime or as part of a linting process.
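
As a quick illustration, here is a sketch of how the forward-reference example above can be inspected with annotationlib. It assumes the get_annotations() function and Format enum described in PEP 749; check the details against the 3.14 documentation:


import annotationlib

class Thing:
    def frob(self, other: OtherThing):  # forward reference, no quotes needed
        ...

class OtherThing:
    ...

# Lazily evaluated: OtherThing exists by the time we ask for the value
print(annotationlib.get_annotations(Thing.frob, format=annotationlib.Format.VALUE))

# Or retrieve the annotation as a string without evaluating it at all
print(annotationlib.get_annotations(Thing.frob, format=annotationlib.Format.STRING))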

If you’re currently using from __future__ import annotations as your workaround for handling deferred annotations, you don’t need to change anything just yet, although this directive is deprecated and will be removed entirely in future Python versions (most likely sometime after 2029).

Better error messages

Over the last several Python revisions, error messages have been improved and polished in many ways. This tradition continues with Python 3.14.

The biggest improvement: Unknown terms that closely match Python keywords now elicit suggestions. For example:


forr a in b:
^^^^
SyntaxError: invalid syntax. Did you mean 'for'?

Many other error improvements also have been rolled in:

  • Issues with unpacking, where there is a mismatch between the expected and received number of items, now have more detailed errors in more cases than before.
  • Misplaced elif blocks generate their own specific error message.
  • Statements that illegally combine ternary assignment and control flow (e.g., x = a if b else pass) now generate detailed error messages.
  • Incorrectly closed strings will elicit suggestions for completing the string.

A safe external debugger interface to CPython

Attaching an external debugger to the CPython interpreter comes at the cost of significant runtime overhead and potential safety issues. Plus, you can’t attach a debugger to CPython spontaneously: to use an external debugger, you have to launch CPython with the debugger already attached.

With Python 3.14, a new debugger interface provides hooks into the CPython interpreter for attaching a debugger without changing its execution. You can use Python’s pdb debugging module to attach to another Python process (by way of its process ID) and perform interactive debugging on the target process without having to restart the process with the debugger attached.

Aside from the added convenience, the new debugger interface also makes it easier for third parties to write better and more robust debugging tools. They will no longer need to inject their own custom code into the interpreter, which can be brittle and produce new bugs.

A C API for Python runtime configuration

The new Python configuration C API lets users set or get information about the current configuration of the Python interpreter using Python objects rather than C structures. This way, the interpreter can be configured directly from Python itself, making it easier to write Python-level tools that make runtime changes to interpreter behavior.

The Python configuration C API is part of a general cleanup of CPython internals and APIs, including the APIs for how CPython is initialized. Note that C users can always drop back to the lower-level APIs if they need them.

Easier ways to use `except` for multiple exceptions

If you want to catch multiple exceptions in a try/except block, you have had to use parentheses to group them:


try:
    flaky_function()
except (BigProblem, SmallProblem):
    ...

With Python 3.14, you can simply list multiple exceptions separated by commas:


try:
    flaky_function()
except BigProblem, SmallProblem:
    ...

The original syntax still works, of course, but the new syntax means a little less typing for a common scenario.

‘Tail-call-compiled’ interpreter

The CPython interpreter in Python 3.14 can use a feature in C code that uses tail calls between functions. When compiled with a C compiler that supports these features, CPython runs slightly faster. Note that this feature isn’t the same thing as enabling tail call optimizations in the Python language; it is an internal optimization for the CPython interpreter. Python developers don’t need to do anything except upgrade to Python 3.14 to see a benefit.

Unfortunately, the original estimated performance improvements for this change turned out to be wildly off, due to a compiler bug in Clang/LLVM 19 (since fixed in subsequent releases). The performance improvement falls in the range of 3% to 5%, well short of the 9% to 15% speedup originally reported. As with any optimization like this, you’ll want to run your own benchmarks to see how well your applications perform.


IBM’s watsonx.data could simplify agentic AI-related data issues 7 May 2025, 5:25 am

IBM has updated its data management platform — watsonx.data — to simplify data challenges related to agentic AI and include capabilities and tools that could help enterprises manage, analyze, and govern data more effectively.

IBM said enterprises’ biggest hurdle to unlock agentic AI and generative AI’s full potential is unstructured data and not inferencing costs.

“Many enterprises are not addressing the root problem. They are focused solely on the generative AI application layer, rather than the essential data layer underneath. Until enterprises fix their data foundation, AI agents and other generative AI initiatives will fail to deliver their full potential,” Edward Calvesbert, VP of product management for watsonx, wrote in a blog post.

Agentic AI, and the applications underpinned by it, can complete tasks without manual intervention while learning from new data. That capability is attracting the attention of enterprises looking to automate more work, and cloud services providers, such as IBM, want to capture this growing market.

Unifying data and governance across multiple formats, clouds

The new updates, which are an effort in that direction, comprise new capabilities inside watsonx.data that bring together an open data lakehouse with data fabric capabilities, such as data lineage tracking and governance, to help enterprises have a single control pane for managing data across formats and multiple clouds.

The combo of an open data lakehouse along with data fabric capabilities should help IBM customers simplify their data stack, said Bradley Shimmin, lead of the data and analytics practice at The Futurum Group.

“Especially for those customers who are looking to use more of their data in powering AI solutions without sacrificing important governance and management capabilities like lineage and cataloging to better understand the origin, movement, and transformation of data across disparate sources and deployment models (cloud, premises, hybrid),” Shimmin added.

New tools for data orchestration and analysis

As part of the updates, IBM has added new tools to watsonx.data for data orchestration and analysis, both of which will also be available as standalone products.

For orchestrating data across formats and pipelines, IBM has introduced a single-interface tool — watsonx.data integration.

Although IBM didn’t provide more details about the tool, Shimmin believes that it makes sense for IBM to invest in the tool as it wants to develop watsonx.data as a platform of choice for customers to create their central data repository.

However, he pointed out that IBM’s efforts in the direction of delivering a single tool to locate, integrate, catalog, transform, and instantiate data as a product are not unique, and most data lakehouses and broader data platforms have tried to reach the same goal.

“The tool or its capability is mission-critical for AI-aspiring businesses right now. And IBM’s new tool is not yet on par with products from Informatica, Alteryx, Snowflake, Google, and AWS,” Shimmin explained.

For data analysis and extracting insights from data using AI, IBM has added the watsonx.data intelligence tool.

IBM hasn’t provided additional details about this tool either, but Shimmin believes that the new tool cannot be compared with similar tools from other BI software vendors and data lakehouse providers, as its purpose is to serve the data integration lifecycle.

However, he pointed out that the new intelligence tool “appears to be uniquely tuned” to directly address the challenge of working with unstructured data, especially in highly distributed environments and in supporting diverse data formats.

“This tool is really tuned to help not just developers but a pretty wide array of user roles, such as data engineers, data scientists, data analysts, and even business users,” Shimmin added.

The new updates to watsonx.data, along with the new tools, are expected to be available in June.


How to gracefully migrate your JavaScript programs to TypeScript 7 May 2025, 5:00 am

TypeScript is a variant of JavaScript that provides strong type information, a powerful development tool that minimizes bugs and makes JavaScript programs easier to build in enterprise settings. TypeScript runs wherever JavaScript does and compiles to JavaScript itself. And, any existing JavaScript program is already valid TypeScript, just without the type information TypeScript provides.

All of this means you can take an existing JavaScript program and transform it into TypeScript, and you can do it incrementally. You don’t have to scrap all your JavaScript code and start from a blank page; you can work with your existing JavaScript codebase and migrate it a little at a time. Or, if you want, you can begin completely from scratch and write new code directly in TypeScript, with all its type-checking features dialed up.

Setting up the TypeScript compiler

TypeScript is a completely separate project from JavaScript, so you can’t work with TypeScript properly without including its compiler.

To get set up with TypeScript, you’ll need Node.js and npm:

npm install -g typescript

You can also use other projects in the JavaScript ecosystem to work with TypeScript. Bun, for instance, bundles the TypeScript compiler automatically, so there’s nothing else to install. The Deno runtime also has TypeScript support built in. If you’ve been mulling making the jump to one of those projects anyway, why not do it with TypeScript?

Compiling TypeScript to JavaScript

The most basic way to compile TypeScript to JavaScript is to simply run the TypeScript compiler on a TypeScript file:

tsc myfile.ts

TypeScript files use .ts as their file extension. When run through the compiler, they are transformed into .js files of the same name in the same place.

This is fine for one-offs, but you most likely have a whole project directory of files that need compiling. To do that without excess hassle, you’ll need to write a simple configuration file for your project to work with TypeScript’s compiler.

TypeScript’s config file is typically named tsconfig.json, and lives in the root directory for your project. A simple tsconfig.json might look like this:

{
  "compilerOptions": {
    "outDir": "./jssrc",
    "allowJs": true,
    "target": "es6",
    "sourceMap": true
  },
  "include": ["./src/**/*"]
}

The first section, compilerOptions, tells the compiler where to place a compiled file. "outDir": "./jssrc" means all the generated .js files will be placed in a directory named jssrc (for “JavaScript source,” but you can use any name that fits your project layout). It also specifies that it will accept regular JavaScript files as input ("allowJs": true), so that you can mingle JavaScript and TypeScript files freely in your src folder without issues.

If you don’t specify outDir, the JavaScript files will be placed side-by-side with their corresponding TypeScript files in your source directory. You may not want to do this for a variety of reasons—for instance, you might want to keep the generated files in a separate directory to make them easier to clean up.

Our config file also lets us define what ECMAScript standard to compile to. "target": "es6" means we use ECMAScript 6. Most JavaScript engines and browsers now support ES6, so it’s an acceptable default. Specifying "sourceMap": true generates .js.map files along with all your generated JavaScript files, for debugging.

Lastly, the include section provides a glob pattern for where to find source files to process.

tsconfig.json has tons more options beyond these, but the few mentioned here should be plenty to get up and running.

Once you set up tsconfig.json, you can just run tsc in the root of the project and generate files in the directory specified by outDir.

Try setting this up now in a copy of a JavaScript-based project you currently have, with at least one .ts-extension file. Be sure to specify an outDir that doesn’t conflict with anything else. That way, the changed files built by the compiler will not touch your existing files, so you can experiment without destroying any of your existing work.

Adding TypeScript type annotations to existing code

Once you’ve set up the compiler, the next thing to do is start migrating your existing code to TypeScript.

Since every existing JavaScript file is already valid TypeScript, you can work a file at a time, incrementally, by renaming existing .js files as .ts. If a JavaScript file has no type information or other TypeScript-specific syntax, the TypeScript compiler will not do anything with it. The compiler only starts scrutinizing the code when there’s something TypeScript-specific about it.

One way to get started with TypeScript annotations is by adding them to function signatures and return types. Here’s a JavaScript function with no type annotations; it’s a way to generate a person’s name in a last-name-first format, based on some object with .firstName and .lastName properties.

function lastNameFirst(person) {
    return `${person.lastName}, ${person.firstName}`;
}

TypeScript lets us be much more explicit about what’s accepted and returned. We just provide type annotations for the arguments and return value:

function lastNameFirst(person: Person): string {
    return `${person.lastName}, ${person.firstName}`;
}

This code assumes we have an object type named Person defined earlier in the code. It also leverages string as a built-in type to both JavaScript and TypeScript. By adding these annotations, we now ensure any code that calls this function must supply an object of type Person. If we have code that calls this function and supplies DogBreed instead, the compiler will complain:

error TS2345: Argument of type 'DogBreed' is not assignable to parameter of type 'Person'.
  Type 'DogBreed' is missing the following properties from type 'Person': firstName, lastName

One nice thing about the error details is you get more than just a warning that it isn’t the correct type—you also get notes as to why that type doesn’t work for a particular instance. This doesn’t just hint at how to fix the immediate problem but also lets us think about how our types could be made broader or narrower to fit our use cases.

Interface declarations

Another way to describe what types can be used with something is via an interface declaration. An interface describes what can be expected from a given thing, without needing to define that thing in full. For instance:

interface Name {
    firstName: string;
    lastName: string;
}

function lastNameFirst(person: Name): string {
    return `${person.lastName}, ${person.firstName}`;
}

In a case like this, we could supply any type we wanted to lastNameFirst(), as long as it had .firstName and .lastName as properties, and as long as they were string types. This lets you create types that describe the shape of the thing you’re using, rather than requiring some specific named type.

Figuring out what TypeScript types to use

When you annotate JavaScript code to create TypeScript, most of the type information you apply will be familiar, since they’ll come from JavaScript types. But how and where you apply those types will take some thought.

Generally, you don’t need to provide annotations for literals, since those can be inferred automatically. For instance, const name: string = "Davis"; is redundant, since it’s clear from the assignment to the literal that name will be a string. (Many anonymous functions can also have their types inferred in this way.)

The primitive types—string, number, and boolean—can be applied to variables that use those types and where they can’t be inferred automatically. For arrays, you can use the element type followed by []—e.g., number[] for an array of numbers—or the generic syntax Array<type> (in this case, Array<number>).

When you want to define your own type, you can do so with the type keyword:

type FullName = {
    firstName: string;
    lastName: string;
};

This could in turn be used to create an object that matches its type shape:

var myname:FullName = {firstName:"Brad", lastName:"Davis"};

However, this would generate an error:

var myname:FullName = {firstName:"Brad", lastName:"Davis", middleName:"S."};

The reason for the error is that the middleName isn’t defined in our type.

You can use the | operator to indicate that something can be one of several possible types:

type userName = FullName | string;

// or we can use it in a function signature ...

function doSomethingWithName(name: FullName | string) {...}

If you want to create a new type that’s a composite of existing types (an “intersection” type), use the & operator:

type Person = {
    firstName: string;
    lastName: string;
};
type Bibliography = {
    books: Array<string>;
};

type Author = Person & Bibliography;

// we can then create an object that uses fields from both types:

var a: Author = {
    firstName: "Serdar", lastName: "Yegulalp", books:
        ["Python Made Easy", "Python Made Complicated"]
};

Note that while you can do this with types, if you want to do something similar with interfaces you need to use a different approach; namely, using the extends keyword:

interface Person {
  firstName: string;
  lastName: string;
}

interface Author extends Person {
  penName: string;
}

JavaScript classes are also respected as types. TypeScript lets you use them as-is with type annotations:

class Person {
    name: string;
    constructor(
        public firstName: string,
        public lastName: string
    ) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.name = `${firstName} ${lastName}`;
    }
}

TypeScript also has a few special types for dealing with other cases. any means just that: any type is accepted. null and undefined mean the same things as in regular JavaScript. For instance, you’d use string|null to indicate a type that would either be a string or a null value.

TypeScript also natively supports the ! postfix operator, known as the non-null assertion. For instance, x!.action() tells the compiler to treat x as neither null nor undefined, so .action() can be called on it without a type error; note that no runtime check is performed.

If you want to refer to a function that has a certain signature by way of a type expression, you can use what’s called a “call signature”:

function runFn(fn: (arg: number) => any, value: number): any {
    return fn(value);
}

runFn would accept a function that takes a single number as an argument and returns any value. runFn would take in such a function, plus a number value, and then execute that function with the value.

Note that here we use the arrow notation to indicate what the passed function returns, not a colon as we do in the main function signature.

Building a TypeScript project

Many build tools in the JavaScript ecosystem are now TypeScript-aware. For instance, the frameworks tsdx, Angular, and Nest all know how to automatically turn a TypeScript codebase into its matching JavaScript code with little intervention on your part.

If you’re working with a build tool like Babel or webpack (among others), those tools can also handle TypeScript projects, as long as you install TypeScript handling as an extension or enable it manually. For instance, with webpack, you’d install the ts-loader package through npm, and then set up a webpack.config.js file to include your .ts files.

The key to moving an existing JavaScript project to TypeScript is to approach it a step at a time—migrate one module at a time, then one function at a time. Because TypeScript can coexist with regular JavaScript, you are not obliged to migrate everything at once, and you can take the time to experiment with figuring out the best types to use across your project’s codebase.


8 ways to do more with modern JavaScript 7 May 2025, 5:00 am

JavaScript is an incredibly durable, versatile, and capable language, and often provides everything you need right out of the box. The foundation for success is knowing the full expanse of what JavaScript offers and how to leverage it in your programs. Here are eight key concepts for developers who want to get the most out of the universe of tools and libraries available in JavaScript today.

Use variable declarations

Although variables are as old as programming itself, they’re still a key concept in modern JavaScript. To start, consider that we prefer const over let in JavaScript programming. Why is that?

A const declares a constant, which is a variable that does not change. We use const any time we can because its immutability makes it less complex. You don’t have to think about how an immutable variable behaves or how it might change throughout the life of the program. const lets you store a value so you can use it wherever you want, without worrying about what would happen if that value changed.

Immutability is a quietly profound concept that echoes throughout software design, especially in functional and reactive programming, where it is leveraged to simplify the overall structure of larger systems.

Something else that is important to know about const is how it works on objects and collections. In these cases, const works to prevent changing the reference to the variable, but it does not prevent alterations to the variable’s internal state. This reveals something important about the internal structure of JavaScript. (Under the hood, object and collection variables are pointers, which means they hold a place in memory. Choosing to use const means we can’t change the place.)

Of course, there are times when we need a variable that truly is a variable, and for that, we use let. JavaScript also has a var keyword. Knowing the difference between let and var can help you understand variable scoping, which helps with more advanced ideas like scopes and closures, which I’ll discuss shortly.

The let declaration restricts the variable to the block where it is declared, whereas var “hoists” its variable to the containing function or global scope. That wider visibility makes var more prone to error, so it’s a good idea to refactor using let whenever you find a var in your code.

Understand collections and functional operators

Functional operators are some of the coolest and most powerful features of modern JavaScript. Operators like map, flatMap, reduce, and forEach let you perform repetitions on collections with a clean, self-documenting syntax. Especially for simpler operations, functional programming constructs like these can make your code read very directly, giving you the meaning you need without a lot of verbiage related to iteration.

When you are writing a program, you are usually trying to handle some kind of business function—say, taking a response from an API and doing something to it based on user input. Within that task, you need a loop, but the loop is just a necessary bit of logic that supports the overall intention. It shouldn’t take up too much space in the program. Functional operators let you describe the loop with minimal obscuring of the overarching meaning.

Here’s an example:


const albums = [
  { artist: "Keith Jarrett", album: "The Köln Concert", genre: "Jazz" },
  { artist: "J.S. Bach", album: "Brandenburg Concertos", genre: "Classical" },
  { artist: "The Beatles", album: "Abbey Road", genre: "Rock" },
  { artist: "Beastie Boys", album: "Ill Communication", genre: "Hip Hop"}];

const genreInput = "rock";

console.log(
  albums.filter(album => album.genre.toLowerCase() === genreInput.toLowerCase())
)

The overall intention of the above code is to filter the list of albums based on genre. The built-in filter method on the albums array returns a new collection with the passed-in function applied. (This style of returning rather than manipulating the original array is another example of immutability in action.) 

The looping logic is reduced to its simple essence in service of the surrounding meaning. It’s worth noting that traditional loops still have a big role to play, especially in complex loops with multiple iterators, or in large loop bodies where explicit, curly-braced blocks make the flow easier to follow, particularly with nested loops.
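
For instance, a running total reads more directly with reduce than with an index-based loop; the numbers here are made up for illustration:

const orders = [12, 4, 20];

// Traditional loop: the iteration machinery is explicit.
let total = 0;
for (let i = 0; i < orders.length; i++) {
  total += orders[i];
}

// Functional operator: the intent (sum the orders) is the whole statement.
const functionalTotal = orders.reduce((sum, amount) => sum + amount, 0);

console.log(total, functionalTotal); // 36 36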

Take advantage of promises and async/await

Asynchronous programming is inherently tricky because it by definition involves multiple actions occurring at once. This means we have to think about the interleaving of events. Fortunately, JavaScript has strong abstractions around these concepts. Promises are the first line of defense in managing async complexity and the async/await keywords give you another layer on top of promises. You can use async/await to write asynchronous operations in a synchronous-looking syntax.

As a software developer, you often consume promises or async functions in libraries. The ubiquitous fetch function built into the browser (and also server-side platforms like Node) is a great example:


async function getStarWarsPerson(personId) {
  const response = await fetch(`https://swapi.dev/api/people/${personId}/`);
  if (response.ok) {
    return await response.json();
  }
}

The function we define is marked async, while the call we consume (fetch) is prefixed with await. It looks like a normal line of synchronous code, but it allows the fetch to happen in its own time, followed by whatever comes next. This frees the event loop to do other things while the fetch proceeds. (I left out error handling in this example, but I address it a little further down.)

Promises are not too difficult to understand, but they do put you closer to the semantics of actual asynchronous operations. This makes them more involved, but also quite powerful. The idea is that a Promise object represents an asynchronous operation, and the resolve and reject callbacks handed to its executor function signal the outcome. Client code then handles the results using the callback methods then() and catch().
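
As a sketch, here’s a hand-rolled promise (the coin flip and the timeout are invented purely for illustration), consumed with then() and catch():

const flipCoin = new Promise((resolve, reject) => {
  // resolve and reject are the callbacks handed to the executor function
  setTimeout(() => {
    if (Math.random() > 0.5) {
      resolve("heads");
    } else {
      reject(new Error("tails"));
    }
  }, 100);
});

flipCoin
  .then(result => console.log("Resolved with:", result))
  .catch(err => console.log("Rejected with:", err.message));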

One thing to keep in mind is that JavaScript is not truly parallel. Its asynchronous constructs give you concurrency, but there is only one event loop, running on a single operating system thread.

Know these five syntax shortcuts

For evidence of JavaScript’s commitment to improving developer experience, look no further than its powerful shortcuts. These are slick operators that make mere keystrokes of some of the most common and clunky corners of JavaScript programming.

Spread

The spread operator (the three-dot, or ellipsis, operator) lets you reference the individual elements of an array or object:


const originalArray = [1, 2, 3];
const copiedArray = [...originalArray];
copiedArray.push('foo'); // [1, 2, 3, 'foo']

Spread also works for objects:


const person = { name: "Alice", age: 30 };
const address = { city: "New York", country: "USA" };
const fullInfo = { ...person, ...address }; 

Destructuring

Destructuring gives you a shorthand to “expand” an array or object into its parts:


const colors = ["red", "green", "blue"];
const [firstColor] = colors; 
firstColor === "red";

const person = { name: "Alice", age: 30, city: "London" }; 
const { city } = person;
city === "London";

This sweet syntax is often seen when importing modules, among other things:


const express = require('express'); 
const { json, urlencoded } = require('express');

Destructuring also supports named parameters and defaults.
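
For example, here’s a hypothetical function whose options object is destructured in the parameter list, with defaults filled in when a property is omitted:

function connect({ host = "localhost", port = 8080 } = {}) {
  console.log(`Connecting to ${host}:${port}`);
}

connect();                        // Connecting to localhost:8080
connect({ host: "example.com" }); // Connecting to example.com:8080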

Optional chaining

Optional chaining takes the old practice of manual null checks and turns it into a single, breezy operator:


const street = user?.profile?.address?.street;

If any link in that dot-access chain is null or undefined, the whole expression evaluates to undefined (instead of throwing a TypeError). This should come as a welcome relief.

Logical assignment

Logical assignment comes in and (&&=), or (||=), and nullish (??=) variants. Here’s the nullish one:


let myString = null;
myString ??= "Foo";
myString ??= "Bar";
myString === "Foo";

Note that myString only changes if it’s actually null (or undefined).
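
The and (&&=) and or (||=) variants work the same way, except they key on truthiness rather than strict nullishness. A quick sketch, with made-up variables:

let retries = 0;
retries ||= 3;       // 0 is falsy, so retries becomes 3

let count = 0;
count ??= 3;         // 0 is not null/undefined, so count stays 0

let isEnabled = true;
isEnabled &&= false; // isEnabled is truthy, so it is reassigned to false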

Nullish coalescing

Nullish coalescing lets you easily choose between a variable that might be null and a default:


let productName = null;
let displayName = productName ?? "Unknown Product";
displayName === "Unknown Product"; // productName itself is still null

These niceties are a distinctive characteristic of modern JavaScript. Using them judiciously makes your code more elegant and readable.

Do not fear scopes and closures

When it comes to the nuts and bolts of using JavaScript as a language, scopes and closures are essential concepts. The idea of scope is central to all languages. It refers simply to a variable’s visibility horizon: once you declare a variable, where can it be seen and used?

A closure is the way a variable’s scope acts in special circumstances. When a new function scope is declared, the variables in the surrounding context are made available to it. This is a simple idea—don’t let the fancy name fool you. The name just means the surrounding scope “closes around” the inner scope.

Closures have powerful implications. You can use them to define variables that matter to your larger context, then define chunks of functional blocks that operate on them (thereby strongly containing or encapsulating your logic). Here’s that concept in pseudocode:


outer context
  variable x
  function context
    do stuff with x
  x now reflects changes 

The same idea in JS:

function outerFunction() {
  let x = 10; 

  function innerFunction() {
    x = 20; 
  }

  innerFunction(); 
  console.log(x); // Outputs 20
}

outerFunction();

In the above example, innerFunction() is a closure. It accesses the variable from its parent scope (also known as the lexical scope, signifying the closure has access to the variables in scope where it is declared, rather than where it is called).

As I mentioned earlier, one of the tenets of functional programming is immutability. This is the idea that for clean designs, we avoid changing variables. Modifying x in our example goes against this guideline. Accessing the variable, though, is an essential ability. The key is to understand how it works.

This kind of closure use is even more important with functional collection operators like map and reduce. These give you a very clean syntax for doing things, and they also have access to the lexical scope where they are declared.
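
Here’s a small sketch of that idea; the discount value is invented for the example:

const discount = 5; // defined in the outer (lexical) scope
const prices = [10, 20, 30];

// The arrow function passed to map is a closure: it reads `discount`
// from the scope where it was declared, not from where map runs it.
const discounted = prices.map(price => price - discount);

console.log(discounted); // [5, 15, 25]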

Fail gracefully (error handling)

Did you know that errors in computing were once called bugs because actual moths were flying into the circuits? Now we have bugs because our AI assistants confidently generate code that breaks in ways we can’t predict (more about that below).

As programmers, we’ll never outgrow strong error-handling practices. Fortunately, modern JavaScript’s error handling is fairly sophisticated. It comes in two basic varieties: normal, synchronous code errors and asynchronous errors. Error objects carry an error message, an optional cause, and a stack trace, which is a dump of the call stack at the moment the error occurred.

The main mechanism for normal errors is the good old try-catch-finally block and its counterpart, the throw keyword. Asynchronous errors are a bit more slippery. The syntax (catch callbacks and reject calls on promises, plus try/catch around awaited calls in async functions) is not too complex, but you need to keep an eye on those async calls and make sure every one of them is handled.
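
A compact sketch of both flavors; the URL reuses the Star Wars API from the earlier example:

// Synchronous errors: try/catch/finally around code that may throw.
try {
  JSON.parse("not valid json");
} catch (err) {
  console.error("Parse failed:", err.message);
} finally {
  console.log("Cleanup runs either way");
}

// Asynchronous errors: a catch() callback on the promise chain...
fetch("https://swapi.dev/api/people/1/")
  .then(response => response.json())
  .catch(err => console.error("Request failed:", err.message));

// ...or try/catch around awaited calls inside an async function.
async function safeGetPerson(url) {
  try {
    const response = await fetch(url);
    return await response.json();
  } catch (err) {
    console.error("Request failed:", err.message);
    return null;
  }
}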

User experience is greatly impacted by careful error handling: Fail gracefully, track errors, don’t swallow errors … that last one is a very embarrassing mistake (ask me how I know).

Use the programming style that works

This one is just applied common sense. In essence, the wise JavaScript developer is ecumenical when it comes to programming paradigms. JavaScript offers object-oriented programming, functional programming, imperative programming, and reactive programming styles. Why wouldn’t you make the most of that opportunity? You can build your program around one style or blend them depending on the use case.

Modern JavaScript has strong class support as well as prototype inheritance. This is typical of JavaScript: there’s more than one way to do it, and no particular right way. Classes are so familiar from the object-oriented world that we mostly use them these days, but prototypes are still useful and important to know about.
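
As a sketch, the same behavior can be expressed with class syntax (which is sugar over the prototype mechanism) or against the prototype directly; the Greeter example is invented for illustration:

// Class syntax
class Greeter {
  constructor(name) { this.name = name; }
  greet() { return `Hello, ${this.name}`; }
}

// The equivalent, written against the prototype directly
function ProtoGreeter(name) { this.name = name; }
ProtoGreeter.prototype.greet = function () { return `Hello, ${this.name}`; };

console.log(new Greeter("Ada").greet());        // Hello, Ada
console.log(new ProtoGreeter("Grace").greet()); // Hello, Grace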

JavaScript has also made functional programming popular. It has been such an effective ambassador of the approach that other languages—even strong object-oriented ones like Java—have adopted aspects of functional programming into their core. Especially when dealing with collections, functional operators are just what you need sometimes.

JavaScript is also a champion reactive language, with popular reactive tools like RxJS and Signals. Note that reactive programming is a broad and interesting topic in its own right, not to be confused with reactive frameworks like Angular and React.

Also, don’t forget that JavaScript was originally a scripting language—it’s right there in the name. Sometimes, a good old imperative script is exactly the right level of detail for the job at hand. Don’t hesitate to go that way if it’s what you need, especially in a one-off systems utility.

There are lots of ways to use JavaScript. Don’t cheat yourself by sticking with just one.

A word about AI assistance

First of all, let’s acknowledge that AI programming is incredibly useful. We’ve only had AI programming assistants for a short time, yet already it would be almost unthinkable to build software without using some kind of AI coding assistance.

I say almost because the same is true of modern IDEs. You’d think no one would want to develop without something like VS Code, IntelliJ, or Eclipse. But the fact is, I’ve seen developers who use the POSIX command line together with Vim or Emacs in a way that makes a visual IDE with a mouse look clunky. You have to see the speed and efficiency of arcane hotkeys, muscle memory, and system knowledge in action to believe it.

As developers, we are going to make excellent use of AI coding in much the same way we make good use of IDEs. Some developers might code more effectively without it, but that’s because they’ve mastered the essentials. They can perceive maximum-value changes and make them, even in large and complex systems.

No matter what tools we use, the better we know our fundamentals, the better off we and our programs will be. The fundamentals translate across languages, frameworks, platforms, and operating systems, and underpin coding in both small and large implementations. There is nothing more gratifying than seeing the benefit of all that learning and groundwork you’ve done while talking to stakeholders about a critical project.

AI can’t do that for you, but you can use AI tools to empower your commitment to programming.


Technical debt is just an excuse 7 May 2025, 5:00 am

We need to stop using the term “technical debt” as an excuse.

Ward Cunningham coined the term and gave it a precise meaning. For Cunningham, technical debt was a conscious decision. The development team, while realizing that there is a better, more “correct” way of doing the job, chooses a more expedient way that has costs down the line. Those costs, of course, are more bugs and more difficulty in maintaining the “wrong” way of doing things. 

This decision is usually made to speed things up, with the understanding that the development team will go back and “fix” things later—hence the notion of “owing a debt.” One could argue that it isn’t really technical debt unless you have a Jira ticket in the backlog to fix the deliberately bad chunk of code.

But let’s be honest here. We’ve twisted the term so far, it’s become meaningless. Not every pile of crappy code in your repository is technical debt. We call it that, but how much of it was a deliberate decision? How much of it has a plan in your backlog to fix it? Not much, right?

Technical undebt

The term “technical debt” has lost its meaning. Now, we use the term to mean “all of the awful code we have in our system that we know we’ll never go back and fix because it is both too costly and too risky to change.” 

All of this crappy code has three origins:

  1. Technical debt: This is the code that you know is sub-par, but that you decided to write for good reasons, and that you have a plan for correcting. Let’s face it—hardly any code out there fits this description. How many development teams actually have a plan for paying back technical debt? Not a lot.
  2. Accidental complexity: Fred Brooks coined this term, which perfectly describes code that isn’t right and that results not from negligence or bad coding skills, but because no one understood the system and made bad decisions. Maybe the team chose a framework that was way too heavy for the task at hand. Maybe the team created unnecessary abstractions or added a feature in a way that doesn’t match the system. Sadly, this is the kind of thing that doesn’t appear until well after the fact.
  3. Just bad code: Most of what gets called technical debt is just rushed, slapped-together, or “emergency” code that was never reviewed, or was glossed over because it “worked” and the customer was screaming: band-aids for customer fire drills, critical bug fixes checked in over the weekend, or artifacts of developers working without enough time, clarity, or support.

A pretty label

The problem with calling it all technical debt is that it puts a pretty label on avoidable problems. We give ourselves an excuse to do the wrong thing because we can give it a fancy name that implies we’ll “pay it back” later, when everyone knows that we never will. When the team is allowed to use the term to justify not doing things the right way, you’ve got a culture in decline. 

In addition, labeling all the bad stuff technical debt can lead to justifying bad decisions and practices. It can hide problems like under-investment in engineering and toxic, constant deadline pressure. 

So let’s stop doing it. Let’s agree that we can’t call it technical debt unless we actually have a backlog item to fix it. Real technical debt should have a work ticket, a correction plan, and a deadline. Anything else should be recognized for what it is: crappy code. Let’s build a culture that reserves “technical debt” for what it actually is, a conscious tradeoff with a repayment plan, and that calls everything else by its right name.

Everything else? It’s not technical debt. It’s plain old code rot.


Google updates Gemini 2.5 Pro model for coders 6 May 2025, 5:58 pm

Google has updated its Gemini 2.5 Pro AI model with stronger coding capabilities, hoping developers would begin building with the model before the Google I/O developer conference later this month.

Released May 6, Gemini 2.5 Pro Preview (I/O edition) is accessible in the Google AI Studio tool and in Vertex AI.

Commenting on the update, Google said developers could expect meaningful improvements for front-end and UI development, alongside improvements in fundamental coding tasks such as transforming and editing code and building sophisticated agentic workflows. Developers already using Gemini 2.5 Pro will find not only improved coding performance but also reduced errors in function calling and improved function-calling trigger rates, the company said.

The Gemini 2.5 Pro model shines for use cases such as video to code, due to the model’s “state-of-the-art video understanding,” and offers easier web feature development, due to the model’s “best-in-class front-end web development,” Google said.

