
Table 6 Different levels of computing power and AI technologies

From: Artificial intelligence tool development: what clinicians need to know?

| No | Specification of computer | Level of AI technology |
|----|---------------------------|------------------------|
| 1 | Entry-level CPU (e.g., Intel Core i3, AMD Ryzen 3) | Basic machine learning algorithms, such as linear regression or decision trees, for small-scale data analysis and prediction tasks |
| 2 | Mid-range CPU (e.g., Intel Core i5, AMD Ryzen 5) | More advanced machine learning algorithms, including neural networks, for tasks such as image recognition, natural language processing and recommendation systems |
| 3 | High-end CPU (e.g., Intel Core i7/i9, AMD Ryzen 7/9, Apple M3 Pro) | High-performance computing (HPC) for training complex deep learning models on large datasets, such as those used in medical imaging, autonomous vehicles and financial modelling |
| 4 | Entry-level GPU (e.g., NVIDIA GeForce GTX 1650, AMD Radeon RX 550, Intel Arc A380) | Accelerated computing for training and inference of machine learning models, particularly for tasks involving parallel processing such as computer vision, speech recognition and gaming. In general, entry-level GPUs are not optimised for AI/ML workloads, consume considerable energy and may become outdated in a short time |
| 5 | Mid-range GPU (e.g., NVIDIA GeForce RTX 2060, RTX 3060/4060, AMD Radeon RX 5600 XT, RX 6700 XT) | Deep learning training and inference for applications requiring higher computational power and memory bandwidth, such as real-time video analytics, virtual reality and autonomous drones |
| 6 | High-end GPU (e.g., NVIDIA GeForce RTX 3080, AMD Radeon RX 6900 XT, RX 7900 XTX, Apple M3 Ultra) | State-of-the-art deep learning research, training of large-scale models (e.g., GPT, BERT) and deployment of AI applications in industries such as healthcare, finance and cybersecurity |
| 7 | FPGA (e.g., Xilinx, Intel Stratix 10) | Customised hardware acceleration for specific AI tasks, such as real-time inferencing in edge devices, network optimisation and hardware emulation of neural networks |
| 8 | ASIC (e.g., Google TPU v5e, Graphcore IPU, Tesla Dojo) | Specialised microchips optimised for AI workloads, offering superior performance and energy efficiency for tasks such as neural network training and inference in data centres and edge devices |

  1. ASIC, Application-Specific Integrated Circuit; BERT, Bidirectional Encoder Representations from Transformers; CPU, Central Processing Unit; FPGA, Field-Programmable Gate Array; GPT, Generative Pre-trained Transformer; GPU, Graphics Processing Unit
  2. The content of the table was adapted from output given by ChatGPT-3.5 and GPT-4o
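To make the first row of the table concrete, the sketch below shows the kind of "level 1" workload that runs comfortably on an entry-level CPU: an ordinary least-squares linear regression fitted in closed form. This is an illustrative example only, not taken from the article; the data are synthetic and NumPy is assumed to be available.

```python
import numpy as np

def fit_linear_regression(X, y):
    """Closed-form ordinary least-squares fit with an intercept term.

    Solves min_beta ||X1 @ beta - y||^2, where X1 is X with a
    leading column of ones; trivial for an entry-level CPU.
    """
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, slope]

# Tiny synthetic dataset following y = 2x + 1 exactly
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
beta = fit_linear_regression(X, y)
```

On a dataset this small the fit is effectively instantaneous; it is only at the deep-learning levels further down the table (rows 3 onwards) that GPU or accelerator hardware begins to matter.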