The many optional features of advanced high-compression video standards give engineers broad latitude to strike the best balance among complexity, latency, and the other factors that constrain real-time performance.
Digital video compression reduces the amount of data needed to represent video while maintaining acceptable quality. Compressing video to ease transmission and storage, however, can sacrifice some image quality. Compression also demands high processor performance and a feature-rich design, because different video applications impose different requirements on resolution, bandwidth, and flexibility. A flexible digital signal processor (DSP) can fully meet these requirements while exploiting the many options offered by advanced compression standards, helping system developers optimize their products.
The inherent structure and complexity of video codec algorithms force implementers to make optimization trade-offs. The encoder is especially important: it must meet application requirements, and it performs the bulk of the processing in a video application. Although encoding rests on information theory, its implementation involves trade-offs among many factors and is therefore complex. An encoder that is highly configurable, and that presents a simple, easy-to-use system interface to a variety of video applications while enabling performance optimization, benefits developers greatly.
Features of video compression
Transmitting or storing raw digital video requires enormous capacity. Advanced video codecs such as H.264/MPEG-4 AVC can achieve compression ratios of 60:1 to 100:1 while maintaining constant throughput, allowing narrower transmission channels and reducing the space video occupies in storage.
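A back-of-envelope calculation shows why such ratios matter. The sketch below (illustrative figures; the function name and the 12 bits/pixel for YUV 4:2:0 are assumptions for the example, not from any codec specification) compares the raw bitrate of standard-definition video with the channel rates after 60:1 and 100:1 compression.

```python
def raw_bitrate_bps(width, height, fps, bits_per_pixel=12):
    """Uncompressed bitrate in bits per second (YUV 4:2:0 stores 12 bits/pixel)."""
    return width * height * bits_per_pixel * fps

# 720x480 at 30 fps: ~124 Mbit/s uncompressed
raw = raw_bitrate_bps(720, 480, 30)
print(raw / 1e6)        # 124.416 Mbit/s raw
print(raw / 60 / 1e6)   # ~2.07 Mbit/s at 60:1
print(raw / 100 / 1e6)  # ~1.24 Mbit/s at 100:1
```

At 100:1 the same video fits a channel roughly the size of a consumer broadband link.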
Like the JPEG standard for still images, the ITU and MPEG video coding algorithms combine transform coding (the discrete cosine transform, DCT, or similar techniques), quantization, and variable-length coding to compress the macroblocks within a frame. Once the algorithm has established a baseline intra-coded frame (I frame), it need encode only the difference in visual content between frames, the residual, to create the many subsequent predicted frames (P frames). This inter-frame differencing uses motion compensation: the algorithm first estimates where each macroblock of the reference frame has moved in the current frame, then removes the redundancy and compresses what remains.
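The motion-estimation and residual steps can be sketched in miniature. The toy below (pure Python, a 1-D signal standing in for a 2-D macroblock; real encoders search two-dimensionally with SAD or SATD cost functions, and all names here are invented for illustration) finds the displacement that best matches a block against a reference and then forms the residual to be coded.

```python
def sad(cur, ref, pos):
    """Sum of absolute differences between block cur and the ref window at pos."""
    return sum(abs(c - ref[pos + i]) for i, c in enumerate(cur))

def best_match(cur, ref):
    """Exhaustive search: the ref position whose window best matches cur."""
    positions = range(len(ref) - len(cur) + 1)
    return min(positions, key=lambda p: sad(cur, ref, p))

ref = [10, 10, 50, 60, 70, 80, 10, 10]   # reference frame (1-D toy signal)
cur = [50, 60, 70, 80]                   # block found in the current frame
mv = best_match(cur, ref)                # "motion vector": displacement = 2
residual = [c - ref[mv + i] for i, c in enumerate(cur)]
print(mv, residual)                      # 2 [0, 0, 0, 0]
```

When the match is exact, the residual is all zeros and compresses to almost nothing; in practice the residual is small but nonzero and is what gets transform-coded.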
Figure 1 shows the structure of a general motion-compensated video encoder. Motion vector (MV) data describes where each block has moved. These vectors are produced during the motion estimation phase, usually the most computationally intensive stage of the algorithm.
Figure 1: Structure diagram of a general motion-compensated video encoder.
Figure 2 shows a P frame (right) and its reference frame (left). Below the P frame, the residual (black) shows how much data remains to be coded once the motion vectors (blue) have been calculated.
Figure 2: P frame and reference frame, showing the residual left to code after the motion vectors are calculated.
Video compression standards specify only the bitstream syntax and the decoding process, which leaves the encoder wide room for innovation. Rate control is one such area: the encoder assigns quantization parameters to apportion noise in the video signal appropriately. The advanced H.264/MPEG-4 AVC standard adds further options such as variable macroblock sizes, quarter-pel motion-compensation resolution, multiple reference frames, bidirectionally predicted frames (B frames), and adaptive in-loop deblocking, which improve both flexibility and capability.
Diverse application requirements
Video application requirements vary widely. Consider, for example, the differing demands that video telephony, video conferencing, and digital video recorders (DVRs) place on video.
Video telephony and video conferencing
For video telephony and conferencing applications, transmission bandwidth is usually the dominant constraint. Depending on the link, available bandwidth can range from tens to thousands of kilobits per second. Some links guarantee a transmission rate, but on the Internet and many intranets the rate varies widely. A videoconferencing encoder therefore usually must accommodate different link types and adapt in real time to continuously changing available bandwidth. Using feedback about conditions at the receiving end, it should continuously adjust its output to deliver the best video quality with as few interruptions as possible. When conditions are poor, the encoder can reduce the average bit rate, skip frames, or change the group of pictures (GoP, the mix of I and P frames). I frames compress less than P frames, so a GoP with fewer I frames needs less overall bandwidth. Because the visual content of a video conference usually changes little, the number of I frames can be reduced below the level typical of entertainment applications.
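The bandwidth effect of a longer GoP is easy to quantify. The sketch below uses illustrative frame sizes (the 100-kbit I frame and 10-kbit P frame, and the function name, are assumptions for the example, not measured figures) to compare the average bitrate of a short GoP against a long one.

```python
def gop_bitrate_kbps(i_frame_kbits, p_frame_kbits, gop_length, fps):
    """Average bitrate for a GoP of one I frame plus (gop_length - 1) P frames."""
    kbits_per_gop = i_frame_kbits + (gop_length - 1) * p_frame_kbits
    seconds_per_gop = gop_length / fps
    return kbits_per_gop / seconds_per_gop

# Fewer I frames (a longer GoP) lowers the average rate:
print(gop_bitrate_kbps(100, 10, 15, 30))    # 480.0 kbps, 15-frame GoP
print(gop_bitrate_kbps(100, 10, 150, 30))   # 318.0 kbps, 150-frame GoP
```

Stretching the GoP from half a second to five seconds cuts the average rate by about a third in this example, which is why low-motion conferencing content can run with very sparse I frames.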
H.264 uses an adaptive in-loop deblocking filter to smooth block edges, keeping the video consistent from the current frame to subsequent frames and improving coding quality; this is especially effective at low bit rates. Conversely, turning the filter off frees capacity that can increase the amount of visual data coded at a given bit rate, or raise motion-estimation precision from quarter-pixel accuracy to finer resolutions. In some cases, the deblocking filtering or the estimation resolution may instead be reduced to lower the complexity of the encoding work.
Because Internet packet delivery offers no quality guarantees, video conferencing can often benefit from coding mechanisms that improve error resilience. As Figure 3 shows, successive slices of P frames can be intra-coded (I slices), so that no complete I frame is needed after the initial frame; this reduces the picture break-up that results when an entire I frame is lost.
Figure 3: Successive slices of P frames can be intra-coded.
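The rotating intra-slice idea can be sketched as a simple round-robin schedule. The snippet below (an illustration under the assumption that the picture is split into a fixed number of horizontal slices; the function name is invented) shows how one slice per frame is refreshed so the whole picture is intra-coded every `num_slices` frames without ever sending a full I frame.

```python
def intra_slice_for_frame(frame_index, num_slices):
    """Which slice is intra-coded in this frame (round-robin refresh)."""
    return frame_index % num_slices

# With 4 slices, the full picture refreshes every 4 frames:
schedule = [intra_slice_for_frame(f, 4) for f in range(8)]
print(schedule)   # [0, 1, 2, 3, 0, 1, 2, 3]
```

A lost packet then corrupts at most one slice until that slice's next refresh, rather than forcing a wait for the next full I frame.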
Digital video recorders
The digital video recorder (DVR) for home entertainment is probably the most widely deployed real-time video encoder application. For such a system, the central problem is balancing storage capacity against image quality. Unlike video conferencing, which cannot tolerate delay, recording can absorb some real-time latency if the system has enough memory for buffering. A practical design gives the output buffer room for several frames, enough to sustain a steady, continuous flow of data to disk. In some cases, however, rapidly changing visual content causes the algorithm to generate large amounts of P-frame data, and the buffer can become congested. Once the congestion is resolved, image quality can be restored.
One effective trade-off mechanism is to change the quantization parameter (Qp) on the fly. Quantization is one of the final steps of the compression algorithm. Raising Qp reduces the bit rate the algorithm outputs, but image distortion grows roughly in proportion to the square of Qp, so quality suffers. Because the change happens in real time, however, it helps avoid skipped frames or broken pictures. When visual content changes very quickly, such as when the buffer is congested, the reduced image quality is far less noticeable than it would be during slowly changing content. Once the content returns to a lower bit rate and the buffer empties, Qp can be reset to its normal value.
Encoder flexibility
Because developers use DSPs across many video applications, DSP encoders should be designed with the flexibility of the compression standards in mind. For example, encoders based on Texas Instruments (TI) OMAP media processors for mobile applications, TMS320C64x+ DSPs, or DaVinci processors are highly flexible. To maximize compression performance, each encoder takes full advantage of its platform's DSP architecture, including the video and imaging coprocessor (VICP) built into some processors.
All the encoders use a common set of base APIs with default parameters, so the system interface stays the same regardless of the type of system. Extended API parameters let an encoder meet the requirements of specific applications. The defaults are preset for high quality, and a high-speed preset is also provided; an application uses the extended parameters to override any preset value.
The extended parameters enable an application to meet H.264 or MPEG-4 requirements. The encoder supports options such as YUV 4:2:2 and YUV 4:2:0 input formats, motion compensation down to quarter-pixel resolution, a range of I-frame intervals (from every frame being an I frame to no I frames after the first), Qp-based bit-rate control, access to motion vectors, deblocking-filter control, simultaneous encoding of two or more channels, and I slices. By default, the encoder determines the motion-vector search range dynamically and without restriction, an improvement over fixed-range search.
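The defaults-plus-overrides pattern described above can be illustrated as follows. The parameter names and values here are hypothetical, invented for the example; they are not TI's actual API. The idea is simply that the base interface carries sensible presets and the extended parameters override them per application.

```python
# Hypothetical defaults standing in for a base-API preset:
DEFAULT_PARAMS = {
    "preset": "high_quality",
    "input_format": "YUV420",
    "i_frame_interval": 30,
    "deblocking": True,
    "motion_resolution": "quarter_pel",
}

def configure_encoder(**extended):
    """Merge extended (application-specific) parameters over the defaults."""
    params = dict(DEFAULT_PARAMS)
    params.update(extended)
    return params

# A conferencing app overrides only what it needs; everything else keeps its preset:
cfg = configure_encoder(input_format="YUV422", i_frame_interval=0)
print(cfg["input_format"], cfg["i_frame_interval"], cfg["deblocking"])
```

Keeping the override surface separate from the base interface is what lets the same system interface serve every application type.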
In addition, there is usually a sweet spot: the optimal output bit rate for a given input resolution and frame rate (fps). Developers should identify this point for their encoder so they can strike the best balance between transmission load and image quality in their design.
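One common way to reason about this sweet spot is bits per pixel: the target bit rate divided by the pixel throughput. The sketch below computes it for an assumed 2 Mbit/s standard-definition stream (the function name and the example rate are illustrative; typical bpp targets depend on the codec and content).

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    """Bits spent per pixel per frame at a given target bit rate."""
    return bitrate_bps / (width * height * fps)

# 2 Mbit/s for 720x480 at 30 fps:
bpp = bits_per_pixel(2_000_000, 720, 480, 30)
print(round(bpp, 3))   # 0.193
```

Comparing the bpp of a candidate operating point against values known to work well for the codec gives a quick first check on whether a resolution/fps/bit-rate combination is realistic.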