Your AI Agent Arrives Cloud-Ready 🤖

Every PyRun workspace launches with GitHub Copilot and OpenCode pre-configured via MCP servers — already connected to your AWS account. Ask your AI to deploy, monitor, query, and scale your cloud workloads in natural language. Plus full ML framework support for training and inference.

New

MCP Servers

Model Context Protocol servers auto-connect your AI agents to your AWS account on every workspace launch. Zero manual configuration needed.

New

GitHub Copilot

Pre-installed and cloud-aware. Copilot can see your AWS resources, suggest deployment code, and help debug distributed Python workloads.

New

OpenCode

An open-source AI coding agent, pre-configured in your PyRun workspace and connected to your AWS environment via MCP from the start.

New

Natural Language Ops

Ask your AI to list S3 buckets, check Lambda logs, trigger a job, or monitor usage. Cloud operations through conversation.
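Under the hood, a request like "list my S3 buckets" typically resolves to an ordinary AWS SDK call. A minimal sketch of the kind of code the agent might generate, assuming boto3 and the workspace's pre-wired credentials (the helper name is ours, for illustration):

```python
def bucket_names(response):
    # Pull just the bucket names out of a list_buckets response.
    return [bucket["Name"] for bucket in response.get("Buckets", [])]


if __name__ == "__main__":
    import boto3  # credentials are already wired up by the workspace

    s3 = boto3.client("s3")
    print(bucket_names(s3.list_buckets()))
```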

Streamlined AI/ML Workflows

Simplify complex AI pipelines. PyRun's integrated environment and automated management let you focus on model development and experimentation.

Popular AI Frameworks

Easily configure runtimes for TensorFlow, PyTorch, Scikit-learn, Dask-ML, and more using PyRun's Runtime Management.

Scalable Data & Training

Leverage Dask and Lithops for distributed data preprocessing, large-scale model training, and hyperparameter tuning.
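As a rough sketch of the Lithops pattern, a worker function is mapped over your inputs and fanned out across serverless workers. The worker function and inputs below are illustrative, and a `FunctionExecutor` is assumed to be configured against your account:

```python
def preprocess(record):
    # Illustrative per-record transform, run on each serverless worker.
    return record.strip().lower()


if __name__ == "__main__":
    import lithops  # assumes Lithops is configured for your AWS account

    fexec = lithops.FunctionExecutor()
    futures = fexec.map(preprocess, ["  Raw ", "DATA ", " rows"])
    print(fexec.get_result(futures))
```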

Interactive LLM Environments

Run and experiment with Large Language Models (LLMs) like Llama 3 locally within your PyRun workspace using Ollama, directly via notebooks.
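For example, with the `ollama` Python client (the model name and prompt here are placeholders):

```python
def make_messages(prompt):
    # Build the chat message list the Ollama API expects.
    return [{"role": "user", "content": prompt}]


if __name__ == "__main__":
    import ollama  # assumes the workspace's Ollama server is running

    reply = ollama.chat(
        model="llama3",
        messages=make_messages("Explain Dask in one sentence."),
    )
    print(reply["message"]["content"])
```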

How It Works

From Workspace Launch to Cloud Action in Seconds

PyRun handles all the wiring so your AI agent is cloud-ready the moment your workspace opens.

1

Launch a Workspace

Create a PyRun workspace with a single click. Your VS Code-like IDE starts immediately.

2

MCP Connects to AWS

MCP servers automatically discover and connect to your AWS account. No manual config, no credentials to paste.

3

AI Agent is Ready

GitHub Copilot and OpenCode are pre-installed and fully aware of your cloud environment, S3 data, and available services.

4

Talk, Code, Deploy

Ask your agent to write a Lithops job, trigger a Lambda, analyze S3 data, or debug a pipeline. Then click Run.
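The "trigger a Lambda" step, for instance, usually compiles down to a single SDK call. A sketch of what that might look like, where the function name and payload are hypothetical:

```python
import json


def make_payload(job_id):
    # Serialize the event object the Lambda handler will receive.
    return json.dumps({"job_id": job_id}).encode("utf-8")


if __name__ == "__main__":
    import boto3  # workspace credentials already in place

    response = boto3.client("lambda").invoke(
        FunctionName="nightly-etl",  # hypothetical function name
        Payload=make_payload("run-42"),
    )
    print(response["StatusCode"])
```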

Ready-to-Run Examples

AI Use Cases & Pre-built Pipelines

Get started quickly with pre-built AI pipelines showcasing PyRun's capabilities for various ML tasks.

Audio Recognition with TensorFlow

An AI pipeline for audio keyword recognition using TensorFlow. Demonstrates audio data preprocessing, model training, and inference.

Learn More →

Distributed Learning with Dask-ML

Shows how to use Dask-ML for distributed machine learning tasks, scaling your model training across multiple nodes for efficiency.
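A minimal sketch of the pattern, assuming dask-ml is installed in the runtime. The dataset sizes are illustrative; the same code scales out by pointing `Client` at a cluster scheduler address instead of the default local cluster:

```python
def n_partitions(n_samples, chunk_size):
    # How many chunks the training data is split into across workers.
    return -(-n_samples // chunk_size)  # ceiling division


if __name__ == "__main__":
    from dask.distributed import Client
    from dask_ml.datasets import make_classification
    from dask_ml.linear_model import LogisticRegression

    client = Client()  # local cluster; pass a scheduler address to go distributed
    X, y = make_classification(n_samples=10_000, n_features=20, chunks=1_000)
    model = LogisticRegression()
    model.fit(X, y)
    print(model.score(X, y))
```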

Learn More →

Image Classification with TensorFlow

An AI pipeline for image classification tasks using TensorFlow. Covers data loading, model building, data augmentation, and evaluation.
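In outline, such a pipeline pairs a preprocessing step with a small Keras model. A sketch assuming a TensorFlow runtime is selected; the input shape and layer sizes are illustrative, not the pipeline's actual architecture:

```python
def normalize(pixels):
    # Preprocessing: scale raw 0-255 pixel values into [0, 1].
    return [p / 255.0 for p in pixels]


if __name__ == "__main__":
    import tensorflow as tf  # assumes a TensorFlow runtime

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),  # one logit per class
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    model.summary()
```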

Learn More →

LLM Execution with Ollama

An interactive notebook environment to run and experiment with Large Language Models (LLMs) like Llama 3 locally within your PyRun workspace using Ollama.

Learn More →
Continuous Innovation

The Future of AI on PyRun 🚀

We're continuously expanding PyRun's AI capabilities. Upcoming enhancements will make the agentic development experience even more powerful:

  • More MCP connectors for additional AWS services and third-party APIs.
  • AI-powered automated pipeline generation from natural language descriptions.
  • Expanding our library of pre-built AI/ML pipelines and use-case templates.
  • Deeper integration for data preprocessing, model training, evaluation, and MLOps.
Check Our Roadmap

Ready to Code in the Cloud with AI?

Launch a workspace with GitHub Copilot and OpenCode pre-connected to your AWS account. Train ML models, deploy serverless jobs, and chat with your cloud — all for free during beta.

Get Started for Free