Intel Core i7 3770 - 3.4 GHz - 4 cores - 8 threads - 8 MB cache - LGA1155 Socket - Box

NobleTec SOURCING MIXE EOL INTEL XEON E5-2609V2

QNAP Mustang-F100 interface cards/adapter Internal

$1,721.63
Mustang-F100, 8 GB DDR4, PCI Express x8, 169.5 x 68.7 x 33.7 mm
Availability: 0
SKU: MUSTANG-F100-A10-R10
QNAP Mustang-F100. Host interface: PCIe, Expansion card form factor: Low-profile, Expansion card standard: PCIe 3.0. Product colour: Black, Grey, Cooling type: Active, Number of fans: 2 fan(s). Chipset: Intel Arria 10 GX1150 FPGA. Power consumption (typical): 60 W. Width: 169.5 mm, Depth: 68.7 mm, Height: 33.7 mm

As QNAP NAS evolves to support a wider range of applications (including surveillance, virtualization, and AI), you not only need more storage space on your NAS but also greater processing power to optimize targeted workloads. The Mustang-F100 is a PCIe-based accelerator card built around the programmable Intel® Arria® 10 FPGA, providing the performance and versatility of FPGA acceleration. It can be installed in a PC or a compatible QNAP NAS to boost performance, making it a strong choice for AI deep-learning inference workloads.

OpenVINO™ toolkit

The OpenVINO™ toolkit is built for convolutional neural network (CNN) workloads and extends them across Intel® hardware to maximize performance. Its model optimizer converts pre-trained deep-learning models from frameworks such as Caffe, MXNet, and TensorFlow into an intermediate representation (IR) binary file pair, which the inference engine then executes heterogeneously across Intel® hardware such as CPUs, GPUs, the Intel® Movidius™ Neural Compute Stick, and FPGAs.

Get deep-learning acceleration on an Intel-based server or PC

You can install the Mustang-F100 in a PC or workstation running Linux® (Ubuntu®) to accelerate applications such as deep-learning inference, video streaming, and data-center workloads. As an acceleration solution for real-time AI inference, the Mustang-F100 also works with the Intel® OpenVINO™ toolkit to optimize inference workloads for image classification and computer vision.

QNAP NAS as an inference server

The OpenVINO™ toolkit extends workloads across Intel® hardware (including accelerators) and maximizes performance. Used together with QNAP's OpenVINO™ Workflow Consolidation Tool, an Intel®-based QNAP NAS makes an ideal inference server that helps organizations quickly build an inference system. Providing a model optimizer and an inference engine, the OpenVINO™ toolkit is easy to use and flexible enough for high-performance, low-latency computer vision that improves deep-learning inference. AI developers can deploy trained models on a QNAP NAS for inference, and install the Mustang-F100 to achieve optimal inference performance.
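The workflow described above has two steps: the Model Optimizer converts a trained model into an IR file pair (.xml topology plus .bin weights), and the Inference Engine then runs that IR on a chosen device. Below is a minimal sketch of that flow, assuming the classic OpenVINO Inference Engine Python API from the pre-2022 releases that still shipped an FPGA plugin; the file paths, the dummy input, and the HETERO:FPGA,CPU device string are illustrative assumptions rather than vendor-confirmed usage for this specific card.

    # Minimal sketch: run an IR model through OpenVINO's classic Inference
    # Engine API (pre-2022 releases, which still shipped the FPGA plugin).
    # Assumed prior conversion step with the Model Optimizer, e.g.:
    #   mo.py --input_model frozen_model.pb --data_type FP16 --output_dir ir/
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()

    # Load the IR produced by the Model Optimizer (placeholder paths).
    net = ie.read_network(model="ir/frozen_model.xml",
                          weights="ir/frozen_model.bin")

    # HETERO:FPGA,CPU targets the FPGA first and falls back to the CPU
    # for any layers the FPGA plugin does not support.
    exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

    # Build one dummy input shaped like the network's first input (NCHW).
    input_name = next(iter(net.input_info))
    n, c, h, w = net.input_info[input_name].input_data.shape
    image = np.zeros((n, c, h, w), dtype=np.float32)  # replace with real data

    result = exec_net.infer(inputs={input_name: image})
    output_name = next(iter(net.outputs))
    print("Top class:", int(np.argmax(result[output_name])))

This is roughly the convert-then-infer flow that QNAP's OpenVINO™ Workflow Consolidation Tool automates on the NAS side; a script like the above would only be needed for a PC or workstation deployment.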
More Information
Manufacturer - QNAP
Stock - Out of Stock
Internal - Yes
Colour of product - Black, Grey
Cooling type - Active
Number of fans - 2 fan(s)
Host interface - PCIe
Expansion card form factor - Low-profile
Expansion card standard - PCIe 3.0
Chipset - Intel Arria 10 GX1150 FPGA
Power consumption (typical) - 60 W
Operating temperature (T-T) - 5 - 60 °C
Operating relative humidity (H-H) - 5 - 90%
Width - 169.5 mm
Depth - 68.7 mm
Height - 33.7 mm
Quantity - 1
Harmonized System (HS) code - 84733020
Specs

General


Manufacturer - QNAP

Attributes


Stock - Out of Stock

Design


Internal - Yes
Colour of product - Black, Grey
Cooling type - Active
Number of fans - 2 fan(s)

Ports & interfaces


Host interface - PCIe
Expansion card form factor - Low-profile
Expansion card standard - PCIe 3.0

Features


Chipset - Intel Arria 10 GX1150 FPGA

Power


Power consumption (typical) - 60 W

Operational conditions


Operating temperature (T-T) - 5 - 60 °C
Operating relative humidity (H-H) - 5 - 90%

Weight & dimensions


Width - 169.5 mm
Depth - 68.7 mm
Height - 33.7 mm

Packaging data


Quantity - 1

Logistics data


Harmonized System (HS) code - 84733020