Intel oneAPI Toolkits

In December 2020, Intel Software released the Intel oneAPI Toolkits, which bring together a complete and diverse set of tools compliant with the oneAPI initiative (compilers, libraries, pre-optimised frameworks, analysers and debuggers) for HPC, IoT and Rendering.

The tools that comprised Intel Parallel Studio XE and Intel System Studio are now integrated into Intel’s next-generation oneAPI products. The Intel oneAPI Toolkits are upward-compatible supersets of the features in the previous studio products. Essentially, Parallel Studio and System Studio customers get the tools they know and love, plus much more, with these new Toolkits!

What’s new?

This is the first update since the Toolkits launched. We’ve highlighted some of the key changes below.

AI and HPC enhancements

  • Performance enhancements for XGBoost and scikit-learn
  • Upgrades to Intel Optimisation for TensorFlow and Intel Optimisation for PyTorch to support effective quantisation
  • Expanded container support for the Intel MPI Library
  • Enhanced offload modeling analysis and GPU roofline capabilities
  • New I/O analysis to examine hardware utilisation at a glance

Media acceleration, ray tracing, and rendering enhancements

  • Voxel Database (VDB) multi-attribute volume support
  • An enhanced denoising library that supports directional lightmaps and Apple Silicon
  • Library GUI enhancements, including improvements to scene lighting, animation and skinning of textures, and scene file load/save
  • Media support for faster transcoding and streaming
  • Expanded support for Intel GPUs including Intel Iris Xe and Iris Xe MAX Graphics

Find out more

We’re hosting a three-part virtual series on 27-29 April, in partnership with Intel Software and Bayncore, looking at how you can take advantage of the Toolkits.

Session 1: Porting Intel Parallel Studio code to Intel oneAPI – how to make the right choice.

In this session, we look at the Intel oneAPI Toolkits and discuss different migration strategies you can adopt in moving from Intel Parallel Studio to Intel oneAPI.

This session includes a demonstration of the new Intel oneAPI compilers and how they complement the ‘classic’ compilers included in the Intel oneAPI Toolkits.

Session 2: Introducing heterogeneity to legacy applications using Intel oneAPI.

One of the significant features supported in Intel oneAPI is the ability to speed up an application by ‘offloading’ parts of its code onto accelerators – such as embedded HD Graphics, discrete GPUs or FPGAs. In this session, we show you how to identify suitable candidates in existing legacy applications – such as those written in C/C++ or Fortran – and how they can be accelerated using offloading techniques.
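To give a flavour of what offloading looks like in practice, here is a minimal DPC++ (SYCL) sketch – not taken from the sessions – that moves a simple loop onto whichever accelerator the default device selector finds, falling back to the CPU if no GPU is available. The array size and the scaling kernel are arbitrary placeholders.

```cpp
#include <CL/sycl.hpp>   // DPC++ (oneAPI) SYCL header
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t N = 1024;        // arbitrary problem size for this sketch
    std::vector<float> data(N, 1.0f);

    // The default selector picks the "best" available device:
    // an Intel GPU if one is present, otherwise the CPU.
    sycl::queue q{sycl::default_selector{}};
    std::cout << "Offloading to: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Wrap the host data in a buffer so the runtime can move it to the device.
        sycl::buffer<float, 1> buf(data.data(), sycl::range<1>(N));

        // Submit a kernel that scales every element on the selected accelerator.
        q.submit([&](sycl::handler &h) {
            auto acc = buf.get_access<sycl::access::mode::read_write>(h);
            h.parallel_for<class ScaleKernel>(sycl::range<1>(N),
                                              [=](sycl::id<1> i) {
                acc[i] *= 2.0f;
            });
        });
    }   // the buffer destructor copies the results back into 'data'

    std::cout << "data[0] = " << data[0] << "\n";   // expect 2
    return 0;
}
```

The same source can be compiled with the oneAPI DPC++ compiler (for example, dpcpp offload.cpp or icpx -fsycl offload.cpp) and run unchanged on a CPU or an Intel GPU; the device name printed at runtime tells you where the kernel actually executed.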

Session 3: Optimised deployment of convolutional neural networks (CNNs) using the Intel Distribution of OpenVINO™ toolkit.

In the third and final session, we look at how to deploy applications and solutions that use deep learning intelligence with the Intel Distribution of OpenVINO™ toolkit. Taking models based on convolutional neural networks (CNNs), the toolkit extends workloads across Intel hardware (including accelerators) and maximises performance.
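As a rough sketch of what deployment looks like in code, the snippet below loads a model in OpenVINO’s Intermediate Representation format and runs a single synchronous inference using the Inference Engine C++ API from the 2021 toolkits. The file name model.xml, the dummy input data and the "CPU" device string are placeholder assumptions; a real application would add image pre-processing and interpret the outputs.

```cpp
#include <inference_engine.hpp>   // OpenVINO Inference Engine API (2021.x)
#include <iostream>
#include <string>

namespace IE = InferenceEngine;

int main() {
    // Hypothetical IR file produced by the OpenVINO Model Optimizer.
    const std::string model_xml = "model.xml";

    IE::Core ie;

    // Read the network description (the matching .bin weights file is found automatically).
    IE::CNNNetwork network = ie.ReadNetwork(model_xml);

    // Names of the first input and output layers.
    const std::string input_name  = network.getInputsInfo().begin()->first;
    const std::string output_name = network.getOutputsInfo().begin()->first;

    // Compile the network for a target device; "GPU" or "MYRIAD" could be used instead of "CPU".
    IE::ExecutableNetwork exec_net = ie.LoadNetwork(network, "CPU");
    IE::InferRequest request = exec_net.CreateInferRequest();

    // Fill the input blob with dummy data (a real application would copy a preprocessed image here).
    IE::Blob::Ptr input_blob = request.GetBlob(input_name);
    float* input_data = input_blob->buffer().as<float*>();
    for (size_t i = 0; i < input_blob->size(); ++i) {
        input_data[i] = 0.0f;
    }

    // Run a synchronous inference and read back the first output value.
    request.Infer();
    IE::Blob::Ptr output_blob = request.GetBlob(output_name);
    const float* output_data = output_blob->buffer().as<float*>();
    std::cout << "First output value: " << output_data[0] << std::endl;

    return 0;
}
```

Because the target device is selected by name, the same code can be pointed at an Intel CPU, integrated or discrete GPU, or a VPU without recompiling – the deployment flexibility the session explores.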

Sign up now.

You can also find out more about Intel oneAPI in our FAQ blog series.

Grey Matter is proud to be an Intel Software Elite Reseller. We can support you with any licensing enquiries that you have. Contact us for a free trial or quote; call our licensing specialist on +44 (0) 1364 655 123 or email intel@greymatter.com.
