Ultra-Low Latency Video & Audio Transmission/Receiving System on FPGA Platform
Achieving low latency in real-time audio/video streaming is a challenging design goal many OEMs face today. It can be a "game-changer", significantly improving the performance of real-time applications in the aerospace, defense, industrial, medical, broadcasting and automotive realms. High latency is also a major drawback in existing video surveillance systems, industrial automation, and applications that involve real-time interaction with video content, such as online video streaming. But how can the latency issue in real-time applications be minimized? At iWave, we have the solution...
[Image: Zynq® UltraScale+™ MPSoC SOM]
Our dedicated team of skilled engineers has leveraged years of FPGA design experience to develop an ultra-low latency video and audio transmission system for a range of industrial and commercial applications. This solution is realized on iWave's state-of-the-art FPGA System on Module (SOM), which integrates a high-performance Arm® + FPGA architecture on a single platform. Powered by the Xilinx® Zynq® UltraScale+™ MPSoC, this ultra-high performance SOM is capable of low latency video and audio transmission at up to 4K resolution over Ethernet in real time. To take this innovation further, we also plan to combine the solution with machine vision algorithms for real-time analysis of the video and audio.
A peek into the features of our FPGA SOM
The SOM combines a feature-rich quad/dual-core Arm® Cortex®-A53 application processor, a dual-core Cortex®-R5 real-time processor and an Arm® Mali™-400 MP2 graphics processing unit with industry-leading 16nm FinFET+ programmable logic (192k to 504k logic cells), all realized on a single, highly integrated, compact (95mm x 75mm) module. The Zynq® UltraScale+™ MPSoC SOM provides 64-bit processor scalability and expandable 2GB DDR4 FPGA memory, making it ideal for image, video and signal processing applications at up to Full HD resolution.
The SOM also features 16 GTH transceiver channels at up to 16.3 Gbps and 4 GTR transceiver channels at up to 6 Gbps, enabling high-speed on-board peripherals such as PCIe, USB 3.0, SATA 3.1, DisplayPort and Gigabit Ethernet.
How the low latency application works with our FPGA SOM
The hardware platform receives audio/video data as HDMI input streams, which are handled by the Xilinx® HDMI RX subsystem IP. Audio streams are fed to an AAC audio codec (iWave's proprietary development) implemented on the Cortex-R5 core of the SoC, while video streams are fed to a low latency H.264 encoder IP developed by iWave's partner, Revatron.
The encoded audio and video streams from the respective encoders are then fed to the MPEG TS (Transport Stream) module (the MPEG TS transmission and reception modules are iWave's proprietary IP), where the encoded streams are packetized per the MPEG transport stream standard.
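To make the packetization step concrete, here is a minimal software sketch of MPEG-TS packetization: 188-byte packets beginning with the 0x47 sync byte, a 13-bit PID, a 4-bit continuity counter, and adaptation-field stuffing to pad the final short packet. This is an illustrative simplification, not iWave's proprietary IP; the PID value and the omission of PES framing, PAT/PMT tables and timestamps are assumptions made for brevity.

```python
# Illustrative MPEG-TS packetizer (simplified; not iWave's proprietary IP).
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def packetize(payload: bytes, pid: int) -> list[bytes]:
    """Split an encoded elementary stream into 188-byte TS packets."""
    packets, cc, offset, first = [], 0, 0, True
    while offset < len(payload):
        chunk = payload[offset:offset + TS_PACKET_SIZE - 4]
        offset += len(chunk)
        pusi = 0x40 if first else 0x00       # payload_unit_start_indicator
        afc = 0x10                           # adaptation control: payload only
        body = chunk
        stuffing = TS_PACKET_SIZE - 4 - len(chunk)
        if stuffing:                         # pad a short final packet with
            afc = 0x30                       # an adaptation field
            af = bytes([stuffing - 1])       # adaptation_field_length
            if stuffing > 1:
                af += b"\x00" + b"\xff" * (stuffing - 2)  # flags + stuffing
            body = af + chunk
        header = bytes([SYNC_BYTE,
                        pusi | ((pid >> 8) & 0x1F),  # PUSI + PID high bits
                        pid & 0xFF,                  # PID low byte
                        afc | (cc & 0x0F)])          # AF control + counter
        packets.append(header + body)
        cc, first = (cc + 1) & 0x0F, False
    return packets
```

The fixed 188-byte packet size is what lets a hardware packet handler operate at line rate with constant, predictable per-packet processing time, which is central to keeping end-to-end latency low.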
The MPEG TS output is passed to the video packet handler, which appends a TCP/IP or UDP header to each MPEG TS packet. The TCP/IP or UDP packets are sent over a 1G link (MAC) through an RJ45 cable; the 1G link is used to stream Full HD resolution, while a 10G link is used to stream 4K resolution. The other end of the RJ45 cable is connected to a remote PC, where VLC media player plays the encoded audio and video stream.
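In software terms, the packet handler's UDP path can be sketched as below. Grouping seven 188-byte TS packets per datagram (7 × 188 = 1316 bytes) is the conventional choice that fits a 1500-byte Ethernet MTU; the host address and port here are illustrative assumptions, not part of iWave's design.

```python
import socket

TS_PACKET_SIZE = 188
PACKETS_PER_DATAGRAM = 7   # 7 * 188 = 1316 bytes, fits a 1500-byte MTU

def stream_ts(ts_data: bytes, host: str = "192.168.1.100", port: int = 1234):
    """Send an MPEG-TS byte stream as UDP datagrams (illustrative only;
    the real design does this in the FPGA packet handler at line rate)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    step = TS_PACKET_SIZE * PACKETS_PER_DATAGRAM
    for i in range(0, len(ts_data), step):
        sock.sendto(ts_data[i:i + step], (host, port))
    sock.close()
```

On the receiving PC, a player such as VLC can open a stream of this form with `vlc udp://@:1234`.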
The reverse (receive) operation is also under development:
Receive the encoded audio and video stream from the network in MPEG TS format.
The video packet handler passes the audio and video packets to their respective decoders (audio to the AAC decoder, video to the H.264 decoder).
Recovered video and audio from the respective decoders are fed to the Xilinx® HDMI TX subsystem IP.
The output from the SOM is displayed on an HDMI monitor (up to 4K supported).
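The demultiplexing step in the receive path above can be sketched as follows: each 188-byte TS packet is routed by its PID to the matching decoder. The PID values and handler callables are illustrative assumptions; a real receiver would also resynchronize on the 0x47 sync byte and check continuity counters.

```python
# Illustrative TS demultiplexer for the receive path (simplified sketch).
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def demux(ts_data: bytes, pid_handlers: dict):
    """Route TS packet payloads to per-PID handlers (e.g. decoders)."""
    for i in range(0, len(ts_data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_data[i:i + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue                          # real receivers resync here
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        afc = (pkt[3] >> 4) & 0x3             # adaptation field control bits
        payload = pkt[4:]
        if afc & 0x2:                         # skip adaptation field if present
            payload = payload[1 + payload[0]:]
        if afc & 0x1 and pid in pid_handlers:
            pid_handlers[pid](payload)

# Usage sketch: collect elementary streams for the decoders.
# PIDs 0x100/0x101 are assumed values, not iWave's actual assignment.
video, audio = bytearray(), bytearray()
handlers = {0x100: video.extend,    # H.264 stream -> video decoder
            0x101: audio.extend}    # AAC stream  -> audio decoder
```

In the actual design this demultiplexing and decoding happens in the SOM's programmable logic and Cortex-R5 core rather than in host software.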
Roadmap of iWave’s Solution
The solution is currently in the prototype validation phase.
The first prototype will be released in early Q1 2019.
Production will start in Q2 2019.
iWave, in collaboration with Revatron, has delivered a unique FPGA solution that reduces latency in real-time applications. OEMs can now benefit from this innovative solution in their designs and achieve substantial improvements in system performance and efficiency. To meet growing industrial requirements, we have built the solution on a highly versatile FPGA platform that supports maximum design reuse and fast time to market, enabling developers to design and build highly efficient, reliable systems that meet demanding industry standards.
Revatron is a leading Japanese AI solutions company focused on developing revolutionary real-time AI applications. Among its notable initiatives is a collaboration with a Japanese national laboratory on a prototype multiple-viewing-angle display system. It has also developed 3D rendering and 3D representation technology with the Tokyo Research Institute and pioneered an AI 2.0 powered video surveillance system. Another breakthrough innovation by Revatron is the "See and Tell" camera, which sends text messages describing what it sees, based on its proprietary DOORs (Direct Object-Oriented Reality system) technology.