Accelerating the convergence of AI and Physics-based Modelling

Overview

The physics-based models needed to validate and train their AI counterparts can become a bottleneck if they are not 1) written in a modern style; 2) parallel and scalable; and 3) accelerated. Many models have yet to achieve these goals, and for good reason: refactoring takes a long time, is risky, and is very expensive. For example, the DOE's ACME/E3SM project, which targeted accelerating and enhancing an Earth system model to run on exascale platforms, has taken 10 years and had a 2023 budget of about $31M. Slow refactoring is lethal, especially in modeling systems where scientific improvements are continuously being rolled out.

Our view is that the current refactoring process is 10-100 times too slow and too expensive. AI assistants have proven very useful in software development and are constantly improving, but they currently lack the context window to handle large, complex codebases. We are building a suite of composable tools that will greatly increase the effectiveness of teams composed of AI assistants and people.

Our Strategy

  • A Hybrid Team: Our team brings together top-notch expertise in refactoring, machine learning, and data science.
  • An Open-Source Approach: Building on open-source tools and models grows a community, which in turn yields stronger tools and paying customers.
  • Powerful Refactoring Tools: Our strategy emphasizes integrating AI assistants with powerful refactoring tools so that humans and AIs can collaborate effectively.

Current Progress

Information about current progress and achievements will be displayed here.