
Neuronspike Technologies

 

We are a fabless semiconductor company developing brain-inspired AI chipsets for generative AI models. Our first chip, Neuronspike Moore, delivers up to 21x faster speed compared to existing processors on the market.

 

Our chip designs are based on a compute-in-memory architecture, in which computations happen within the memory itself. This enables ultra-high-throughput computation on our chips.

 

A single Neuronspike Moore chip can match the generative AI throughput of four Nvidia A100 GPUs.

 

We will soon begin accepting pre-orders and partnership inquiries.

Mission

Our mission is to develop fast and efficient chipsets that help enterprises create the future and improve lives using artificial intelligence.

[Chart: Performance comparison on the Llama-7B model — inference with Neuronspike Moore chips vs. inference with Nvidia A100 chips]

Towards AGI

Generative AI models and multi-modal AI models could lead to versatile artificial general intelligence, where machines can reason and perform visual, language, and decision-making tasks. However, these models have grown rapidly in size and are expected to grow by another 1000x in the next three years.

This creates the need for a solution to the memory wall in microprocessors: memory bandwidth limits the computational throughput of a processor system, because large amounts of data must be moved back and forth within it.

 

Compute-in-memory architecture offers a promising solution to the memory wall. Computations happen in the memory itself instead of moving data around, resulting in more than 20x performance gains in memory-bound computations such as generative AI.
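To see why generative AI inference hits the memory wall, consider a minimal roofline-style sketch. The hardware figures below are rough public numbers for an Nvidia A100 used purely as assumptions for illustration; they are not Neuronspike measurements.

```python
# Illustrative roofline estimate of why generative AI inference is
# memory-bound on a conventional GPU. The constants are assumptions
# (approximate public A100 figures), used only for this sketch.

PEAK_FLOPS = 312e12   # assumed peak FP16 throughput, FLOP/s
MEM_BW = 2.0e12       # assumed HBM memory bandwidth, bytes/s

def attainable_flops(arith_intensity):
    """Roofline model: throughput is capped by the lower of the
    compute roof and bandwidth * arithmetic intensity (FLOP/byte)."""
    return min(PEAK_FLOPS, MEM_BW * arith_intensity)

# Batch-1 token generation is dominated by matrix-vector multiplies:
# for an N x N FP16 weight matrix we read ~2*N*N bytes and perform
# ~2*N*N FLOPs, i.e. an arithmetic intensity of roughly 1 FLOP/byte.
achieved = attainable_flops(1.0)

print(f"attainable: {achieved/1e12:.1f} TFLOP/s "
      f"({100 * achieved / PEAK_FLOPS:.1f}% of peak)")
# → attainable: 2.0 TFLOP/s (0.6% of peak)
```

Under these assumptions the chip is limited to about 2 TFLOP/s, under 1% of its compute peak, which is the memory wall in action: raising effective memory bandwidth (as compute-in-memory aims to do) moves the bound, not faster arithmetic units.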

Supported by

[Logos: HKSTP · HKX · Harvard Innovation Lab · HKUST]