USB module gives bare metal access to RISC-V AI chip
SiFive is working with Kinara to create a USB-based board that allows bare metal development on high performance RISC-V processor cores in an AI chip.
Bare metal means writing code that runs directly on a controller or processor core, rather than on top of a real-time operating system or a high-level operating system such as Linux. Bare metal development is usually needed where memory is constrained or where performance requirements such as low latency must be met.
The HiFive Xara ‘enablement board’ gives direct access to the SiFive Intelligence X280 processor core in the Kinara Ara-2 edge AI chip, and also includes sample code to allow customers to evaluate the X280 IP. This enables the development of customised RISC-V vector software to support a wide range of AI workloads, from traditional CNNs to advanced generative AI and multi-modal vision transformers.
The partnership is made even more interesting by the fact that Kinara is in the process of being acquired by NXP Semiconductors, with the deal expected to be completed next month. NXP microcontrollers and microprocessors are based around the Arm Cortex architecture and it does not currently have a RISC-V chip, although it does use the technology in the EdgeLock A30 secure authenticator. NXP is also part of the Quintauris RISC-V development joint venture with Qualcomm, Bosch, Infineon and Nordic Semiconductor.
“We designed the Xara board with the intention of allowing SiFive customers to evaluate the real-time behaviour of the X280 IP before integrating it into their own custom chips,” said Jack Kang, SVP of worldwide business development, sales and customer experience at SiFive. “The expanded ability to access Kinara’s Ara-2 processor means developers can also explore the latest in edge AI processing capabilities.”
Kinara’s Ara-2 uses two SiFive X280 64-bit RISC-V cores for pre- and post-processing tasks, tensor filtering, floating-point functions, and more. These X280 cores connect to high-performance, high-efficiency Kinara NPU cores that are optimised for AI inference at the edge with up to 40 TOPS of performance, and include support for generative AI workloads such as transformer-based models including LLaVA, LLaMA and other large language models (LLMs).
The Xara board will be available to select customers in late Q2 of this year.
staging.kinara.ai/evaluationplatform/; www.sifive.com
