Nvidia H100s are Available On Demand 🚀

Nvidia H100 VMs are now available to start right away on Paperspace by DigitalOcean. Try it out by starting an H100 VM. ⚡️

What you’ll have access to:

  • Nvidia H100: Access H100 machines in either an H100x1 (single chip) or H100x8 (entire H100 host) configuration, with everything working straight out of the box on the updated ML in a Box VM template
  • Faster Networking: By the end of January, all users on H100x8 configurations will have access to 3.2 Tb/s interconnect speeds for intensive multi-node training runs

On-Demand Access & Guaranteed Access

Start H100 on-demand

Start Now

Guarantee access to H100s
(term discounts & multi-node training available)

Schedule a Call

Other Improvements

Simplified Container Registry Experience

  • Redesigned the container registry experience to provide field validation and checks, plus enhanced management of existing private container registries
  • New flow for adding containers to Gradient Deployments, helping ensure deployments start successfully

Endpoint Security on Gradient Deployments

  • When creating a deployment, you can choose whether to set the endpoint to public or protected. A protected endpoint is secured with basic access authentication (an encoded username/password token)
  • When a user tries to access a protected endpoint, they are required to supply a username and password, as shown in the sketch below
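
For example, a client can call a protected endpoint with standard HTTP basic authentication. Here is a minimal sketch in Python; the endpoint URL and credentials are placeholders, not real values:

```python
import requests

# Placeholder endpoint URL and credentials -- substitute your own deployment
# endpoint and the username/password configured on the protected deployment.
ENDPOINT = "https://my-deployment.example.com/predict"

response = requests.post(
    ENDPOINT,
    json={"inputs": [1, 2, 3]},
    auth=("my-username", "my-password"),  # sent as a basic access authentication header
)
response.raise_for_status()
print(response.json())
```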

Improved Cloud Networking

  • Internet access now supports sustained 2Gbps connectivity inbound and outbound per VM


Paperspace x DigitalOcean Integrations

Paperspace was recently acquired by DigitalOcean, and improvements to the Paperspace platform are already rolling out.

DigitalOcean Spaces in Notebooks

DigitalOcean Spaces can now be added as an additional data source in Notebooks. With this integration, users can access and manipulate their Spaces buckets directly from the Notebook experience: creating a new data source and entering the Spaces URL is all that is required to read and write data in DigitalOcean storage.
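
Because Spaces is S3-compatible, the same data can also be read and written programmatically from notebook code with a standard S3 client. A minimal sketch using boto3 follows; the region, bucket name, object key, and credentials are placeholders:

```python
import boto3

# DigitalOcean Spaces speaks the S3 API, so boto3 works with a custom endpoint.
# The region, bucket, object key, and credentials below are placeholders.
spaces = boto3.client(
    "s3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id="YOUR_SPACES_KEY",
    aws_secret_access_key="YOUR_SPACES_SECRET",
)

# Write a small object, then read it back.
spaces.put_object(Bucket="my-bucket", Key="datasets/hello.txt",
                  Body=b"hello from a Gradient notebook")
obj = spaces.get_object(Bucket="my-bucket", Key="datasets/hello.txt")
print(obj["Body"].read().decode())
```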

DigitalOcean SSO

We recently added the DigitalOcean SSO integration. This feature allows users to log in to Paperspace using their DigitalOcean SSO credentials. With this integration, it is easier to access products and computing capabilities across platforms.

Compute Limits are now easier to use

We improved Compute Limit management for team administrators. Compute Limits help teams monitor and restrict monthly compute spend by providing fine-grained control over compute usage. You can now create and edit email alerts and absolute maximums for both teams and team members from the billing page. Administrators receive an email when an alert or maximum amount is reached, and users are notified in-app if they are blocked from accruing additional compute. We hope these changes give administrators more insight into, and control over, their monthly spend.

For more information about Compute Limits, please visit our documentation.

Fixes & Improvements

  • Added the ability to edit team names
  • Improved invoice clarity by adding a section to represent Gradient storage costs
  • Improved visibility into Core machine status by more accurately rendering real-time machine state
  • Fixed a bug where some users could not scroll on streaming Core machines
  • Fixed a bug where non-US countries were not selectable during signup onboarding

Stable Diffusion on Notebooks

We just published a two-part blog on getting Stable Diffusion up and running in Gradient Notebooks using Dreambooth. In part 1, we walk through each of the steps for creating a Dreambooth concept from scratch within a Gradient Notebook, generate novel images from input prompts, and show how to export the concept as a model checkpoint. In part 2, we show how to train textual inversion for Stable Diffusion and use it to generate samples that accurately reflect the features of the training images through control over the prompt.

To make it easier for our free users to take advantage of the platform, we have also released the Stable Diffusion models as Public Datasets. These can be mounted in any Gradient Notebook, which removes the need to download the files from Hugging Face each time you restart the notebook. Furthermore, these files do not count toward the storage limits for Free GPU users, so storage space is no longer a constraint. Be sure to try out the new process!
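
Once the public dataset is mounted, the weights can be loaded straight from the mount path instead of being downloaded. Below is a minimal sketch using the diffusers library; the mount path is an assumption, so check the Datasets panel in your notebook for the actual location:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed mount path for the Stable Diffusion public dataset -- the path shown
# in your notebook's Datasets panel may differ.
MODEL_PATH = "/datasets/stable-diffusion"

pipe = StableDiffusionPipeline.from_pretrained(MODEL_PATH, torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # requires a GPU-backed notebook

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```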

Deployments Autoscaling

We’ve added the ability to autoscale your Gradient deployments by adding scaling criteria to the spec document. You can autoscale the deployment based on specific metrics including CPU utilization, memory utilization and # of requests. Documentation on how to get started can be found here.

Additionally, the spec has been updated with an enabled flag for both the deployment as a whole and the autoscaling feature, which can be used to turn each on and off. Previously, you had to go into the spec and change the number of replicas to 0 to turn a deployment off.
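
To give a feel for the shape of the spec, the fragment below sketches a deployment with both flags and a CPU-based scaling rule. It is an illustrative sketch only; treat the field names as assumptions and consult the Gradient Deployments documentation for the authoritative schema:

```python
import yaml

# Illustrative only -- field names here are assumptions, not the documented schema.
spec = {
    "enabled": True,              # turn the deployment as a whole on or off
    "image": "my-registry/my-inference-server:latest",  # placeholder image
    "port": 8080,
    "resources": {
        "replicas": 1,
        "autoscaling": {
            "enabled": True,      # turn autoscaling on or off independently
            "maxReplicas": 4,
            "metrics": [
                {"metric": "cpu", "summary": "average", "value": 70},
            ],
        },
    },
}

print(yaml.safe_dump(spec, sort_keys=False))
```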

Activity Log

To track autoscaling events, deployment updates, and deployment starts and stops, we also added an activity log, which can be viewed from the Activity Log tab in the project view.

Fixes and Improvements

  • Added email notifications for when Gradient Deployments are not provisioned properly
  • Updated the Nvidia RAPIDS container to RAPIDS 22.10
  • Fixed a few documentation links in the console that were taking users to stale URLs
  • Upgraded our database to support enhanced metrics to track network and database performance
  • Upgraded the Nvidia templates to support Nvidia 510 drivers
  • Fixed a bug where deployments would sometimes get deleted when team compute limits were hit
  • Fixed a bug that was preventing private S3 buckets from mounting in Gradient notebooks
  • Improved the container caching process allowing more frequent updates to notebook runtimes

Paperspace partners with Graphcore to provide IPU-powered notebooks 🔋

We're excited to launch a partnership to bring new machine learning hardware to Paperspace!

Graphcore IPUs now available in Gradient! announcement 

As of today, Gradient Notebooks users can launch IPUs from Graphcore on Paperspace -- free for up to 6 hours!

Graphcore IPUs are specialty chips designed to accelerate machine learning workloads.

We're pleased to offer Graphcore's IPU-POD16 machine with 10GB of free storage. 

We've made it extremely easy to get started. Just head over to the Gradient console in Paperspace, create a new notebook, and select one of the new Graphcore runtimes.

Once in the notebook, it's easy to start running code.

We've created three runtimes to start with: Hugging Face, PyTorch, and TensorFlow 2.
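
As a taste of what running on the IPU looks like in the PyTorch runtime, here is a minimal inference sketch using Graphcore's poptorch library (the toy model and shapes are ours, purely for illustration):

```python
import torch
import poptorch  # Graphcore's PyTorch extension, bundled with the IPU runtimes

# Toy model purely for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)

opts = poptorch.Options()
# Wrap the model so the forward pass is compiled for, and executed on, the IPU.
ipu_model = poptorch.inferenceModel(model, opts)

x = torch.randn(8, 16)
print(ipu_model(x).shape)  # expected: torch.Size([8, 2])
```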

TRY NOW


For more information, be sure to read the announcement.

DALL-E Mini and all-new free Ampere GPUs for Growth plan subscribers! 🧑‍🎨

The hype across the internet for DALL-E 2 and DALL-E Mini has been building for weeks -- and now you can train your own DALL-E Mini model in a Gradient Notebook!

Let's get into the updates.

DALL-E Mini runtime now live on Gradient! announcement 

We're excited to release a new runtime tile for DALL-E Mini. The runtime is based on JAX and makes it easy to create generative art on high-powered Paperspace GPUs.

To get started, head over to the console, create a new notebook, select the DALL-E Mini tile, and get going!

New ultra-powerful A4000 and A5000 GPUs now FREE on Gradient Growth plan! announcement 

As we continue to offer the best selection of cloud GPUs on the market, we also continue to extend our lead in the number of unlimited instances we offer to Gradient subscribers.

We've just added A4000 and A5000 machines to the Gradient Growth plan, which means the list of free GPUs available on Growth is longer than ever.

Check out all the free GPUs available to Gradient subscribers below!


| GPU | Price | Architecture | Launch Year | GPU RAM | CPUs | System RAM | Current Street Price (2022) |
|---|---|---|---|---|---|---|---|
| M4000 | Free (Gradient Free-tier) | Maxwell | 2015 | 8 GB | 8 vCPU | 30 GB | $433 |
| P4000 | $8/mo (Gradient Pro) | Pascal | 2017 | 8 GB | 8 vCPU | 30 GB | $859 |
| P5000 | $8/mo (Gradient Pro) | Pascal | 2016 | 16 GB | 8 vCPU | 30 GB | $1,795 |
| RTX4000 | $8/mo (Gradient Pro) | Turing | 2018 | 8 GB | 8 vCPU | 30 GB | $1,247 |
| RTX5000 | $8/mo (Gradient Pro) | Turing | 2018 | 16 GB | 8 vCPU | 30 GB | $2,649 |
| A4000 | $8/mo (Gradient Pro) | Ampere | 2021 | 16 GB | 8 vCPU | 45 GB | $1,099 |
| A5000 | $39/mo (Gradient Growth) | Ampere | 2021 | 24 GB | 8 vCPU | 45 GB | $2,516 |
| A6000 | $39/mo (Gradient Growth) | Ampere | 2020 | 48 GB | 8 vCPU | 45 GB | $4,599 |


For more information, be sure to read the docs.


Updated PyTorch, TensorFlow, and RAPIDS runtimes announcement 

We also wanted to let you know that we've rolled out updated Notebook runtimes for PyTorch, TensorFlow, and RAPIDS. 

In the notebook console you'll now find 1-click runtime tiles for PyTorch 1.12, TensorFlow 2.9.1, and RAPIDS 20.6.


Is there another runtime you wish we'd support out of the box? Let us know!

Introducing a new docs experience for Core and Gradient! 📚

New docs come to Paperspace! announcement 

We're excited to introduce an entirely new unified docs experience for Paperspace! 

After maintaining several different systems for documenting different parts of the product, we're eager to announce that Paperspace docs are now available in a single location with a new unified theme and organizational structure!

You can now find Core documentation, Gradient documentation, and general Account Management documentation all in one place!

If you need a place to start, we recommend starting with the Core overview or the Gradient overview -- you'll be able to launch right into tutorials, guides, and reference materials designed to help you succeed with Paperspace.

Have an idea for how to improve Paperspace documentation further? Please send us a note with any comments or suggestions!

All-new Linux SSH experience and improved machine create experience in Core! 🛫

We're excited to announce some brand new Core experiences! Let's jump right into what's new.

All-new Linux SSH experience announcement 

We've reconfigured the Linux machine create experience to optimize for connecting to Linux machines via SSH.

We feel that a direct connection to a Linux machine is a fantastic experience. We'll still support Linux VMs in the browser, but if you get a chance, give SSH a try -- it's so easy to connect!


Managing machines just got a lot better announcement 

We've also released a substantial cleanup of the machine settings page in Core, which makes it easier than ever to access and manage machine settings.

Let's say, for example, we want to create a snapshot of our new machine -- easy!

Or what if we want to update the machine name and adjust the auto-shutdown timer? Also easy!

We've also made it easier to do things like assign public IPs, generate templates, and more!

Redesigned account settings improvement

We've also updated the global Paperspace account settings to the latest design system standard. 

You'll now find tabs for Profile, Security, and SSH Keys, and in general it should now be easier to access these important settings.

Dynamic public IP addresses improvement 

  • We added support for dynamic public IP addresses, which provide a public IP at minimal cost

Capacity upgrades improvement 

Meanwhile, we've also been busy adding plenty of capacity to Paperspace datacenters.

  • We onboarded a new fleet of RTX4000 machines to the CA1 region
  • We dramatically expanded GPU compute capacity in the NY2 region
  • We added nearly 100TB in shared storage across regions
  • And don't worry, we didn't forget about Europe! New capacity is coming soon!

Bugfixes fix 

  • We fixed a bug that was sometimes causing utilization graphs to display inaccurately



Introducing 100% self-serve private networks, shared drives, and public IPs! 🏄

We just made a number of improvements to help Core power users self-serve Paperspace resources. 

With this update, you can now create private networks, spin up shared drives, and assign public IP addresses to any machines that you manage!

Self-serve private networks improvement 

First up, we're pleased to bring private networks to all Core users. When you create a private network, you create a shared resource pool for your team that is isolated from every other machine and customer on Paperspace.

Once you create a private network, you can add machines and drives to the network to share with team members.

Be sure to read the docs for more info!

Self-serve private storage improvement 

Next up, we've made it easy to share a drive among multiple Core machines. After you create a private network, you can spin up a shared drive and attach it to the network in a matter of seconds!

For more information on shared drives, check out the docs!

Self-serve public IPs improvement 

Finally, we've made it a lot easier to claim and assign public IP addresses! While previously it was possible to assign a machine to a public IP after the machine was created, we've now streamlined the process to make it more visible at the team level.

To claim a public IP, simply visit the Public IPs tab in the console and claim the address. (Note that Public IPs are region-specific.)

To assign the new public IP to a machine, all we need to do is use the Assign feature to select the machine we want to expose to the public web. That's all there is to it!

If you get stuck please read the docs to learn more or reach out to us with any questions. 

Bugfixes fix 

  • We resolved a troublesome issue that resulted in erroneous invoices being sent to a small number of users
  • We decreased errors related to over-provisioning on the Paperspace public cluster
  • We improved the strategy for guaranteeing hot nodes and faster startup times on the Paperspace public cluster
  • We fixed a number of small issues related to Windows 10 BYOL machines

All-new high-powered NVIDIA Ampere instances! 🔋

We're pleased to announce a series of new GPU-backed instances available on both Core and Gradient featuring NVIDIA's Ampere microarchitecture!

Introducing all-new Ampere instances! announcement 

Announced in mid-2020, Ampere is the codename for NVIDIA's latest line of GPU accelerator cards. Competition for these cards has been fierce and we're happy to bring you four flavors of Ampere, anchored by the top-of-the-line A100.

Introducing Ampere instances

In addition to the instances listed, we've also introduced 2-way, 4-way, and 8-way configurations for these cards. 

The full table of instances on Paperspace has been updated in the docs. In general, any instance made available on Core will arrive in Gradient shortly thereafter.

Multi-GPU also comes to Windows machines improvement  

One thing you might have noticed already is that multi-GPU instances in Core are no longer exclusive to Linux. You can now spin up any multi-GPU instance on a Windows machine!

Check out the Paperspace console to get started. 

Model-backed deployments in Gradient Deployments improvement  

We added an important feature to Gradient Deployments: model-backed deployments! 

Gradient Deployments

It's now possible to inject a model at deployment runtime, which means Gradient can now fetch a model from the Gradient model registry directly. Models can also be referenced from an external S3 bucket.
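
As an illustration, a model reference in the deployment spec might look roughly like the following. The keys below are assumptions made for the sake of the example, not the documented schema:

```python
# Illustrative only -- the keys below are assumptions, not the documented schema.
# A model can come from the Gradient model registry ...
registry_model = {"id": "model-abc123", "path": "/opt/models"}   # hypothetical registry ID and mount path

# ... or be referenced from an external S3 bucket.
s3_model = {"url": "s3://my-bucket/checkpoints/model-v2/", "path": "/opt/models"}

deployment_models = [registry_model]  # pick whichever source fits the deployment
```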

For more information, read the docs or reach out if you'd like a demo!

State persistence bugs in Gradient Notebooks improvement  

We made substantial improvements to the way that application and cell state is managed in Gradient Notebooks. 

Previously, if you navigated away from a notebook while a cell was running and then returned, the cell would sometimes lose its state. We're happy to have implemented a substantial fix for this issue and a number of other issues affecting state management.

If you have feedback for us, please drop us a line!
