Performs high-speed ML inferencing
The Coral Mini PCIe Accelerator is a PCIe module that brings the Edge TPU coprocessor to existing systems and products.
The Edge TPU is a small ASIC designed by Google that provides high-performance ML inferencing with low power requirements: it is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS in a power-efficient manner. This on-device processing reduces latency, increases data privacy, and removes the need for constant high-bandwidth connectivity.
The Mini PCIe Accelerator is a half-size Mini PCIe card designed to fit in any standard Mini PCIe slot. This form factor enables easy integration into ARM and x86 platforms, so you can add local ML acceleration to products such as embedded platforms, mini-PCs, and industrial gateways.
Works with Debian Linux
Integrates with any Debian-based Linux system with a compatible card module slot.
Supports TensorFlow Lite
No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
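As a rough illustration of that workflow (not taken from this page), the sketch below assumes a TensorFlow Lite model that has already been compiled for the Edge TPU with Google's Edge TPU Compiler, plus the tflite_runtime Python package and the libedgetpu runtime installed on the host; the model file name is a placeholder.

    # Minimal sketch: run an Edge TPU-compiled TensorFlow Lite model with the
    # tflite_runtime interpreter and the Edge TPU delegate (libedgetpu).
    # "model_edgetpu.tflite" is a placeholder path for a pre-compiled model.
    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    # Load the delegate that dispatches supported ops to the Edge TPU.
    interpreter = Interpreter(
        model_path="model_edgetpu.tflite",
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()

    # Build a dummy input that matches the model's expected shape and dtype
    # (e.g., a 224x224 RGB image tensor for MobileNet v2).
    input_details = interpreter.get_input_details()[0]
    dummy_input = np.zeros(input_details["shape"], dtype=input_details["dtype"])

    # Run inference on the Edge TPU and read back the output tensor.
    interpreter.set_tensor(input_details["index"], dummy_input)
    interpreter.invoke()
    output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
    print("Output shape:", output.shape)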
Supports AutoML Vision Edge
Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
Requirements:
The Coral Mini PCIe Accelerator must be connected to a host computer with an available Mini PCIe slot, an ARM or x86 platform, and a Debian-based Linux operating system.
For software required on the host, see the software and operation section.
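For a quick sanity check that the host software can see the card, the sketch below assumes the PyCoral Python library is installed on the host (an assumption; this page does not specify the host software). It simply lists the Edge TPU devices the runtime can detect.

    # Minimal sketch, assuming the PyCoral library is installed on the host:
    # list the Edge TPU devices visible to the runtime to confirm the card
    # is detected after the driver and runtime are set up.
    from pycoral.utils.edgetpu import list_edge_tpus

    devices = list_edge_tpus()
    if devices:
        for device in devices:
            # Each entry reports the interface type (e.g., PCIe) and device path.
            print("Found Edge TPU:", device)
    else:
        print("No Edge TPU detected; check the driver and runtime installation.")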