The GPU is the "heart" of the graphics card, playing a role equivalent to that of the CPU in a computer. It determines the class and most of the performance of the graphics card, and it is also what separates 2D graphics cards from 3D graphics cards. A 2D display chip relies mainly on the CPU when processing 3D images and effects, which is known as "software acceleration". A 3D display chip integrates three-dimensional image and effects processing into the chip itself, which is known as "hardware acceleration". The display chip is usually the largest chip on the graphics card (and the one with the most pins). The GPU reduces the graphics card's dependence on the CPU and takes over part of the work the CPU originally did, especially in 3D graphics processing. Core technologies used by GPUs include hardware T&L (transform and lighting), cube environment texture mapping and vertex blending, texture compression and bump mapping, and a dual-texture, four-pixel, 256-bit rendering engine; hardware T&L in particular can be considered the hallmark of the GPU.
Main functions
Today, the GPU is no longer limited to 3D graphics processing. The development of general-purpose GPU computing (GPGPU) has attracted wide attention in the industry, because the GPU can deliver tens or even hundreds of times the performance of the CPU in floating-point and parallel computation. Such a powerful "rising star" has inevitably made CPU market leader Intel nervous about the future, and NVIDIA and Intel have frequently traded words over whether the CPU or the GPU matters more. Current standards for general-purpose GPU computing include OpenCL, CUDA, and ATI Stream. OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. It provides a unified programming environment that lets software developers write efficient, lightweight code for high-performance servers, desktop systems, and handheld devices, and it targets multi-core CPUs, GPUs, Cell-type architectures, and other parallel processors such as digital signal processors (DSPs). It has broad prospects in games, entertainment, scientific research, medicine, and other fields. Current products from both AMD-ATI and NVIDIA support OpenCL. NVIDIA first proposed the concept of the GPU when it released the GeForce 256 graphics chip in 1999, and since then the cores of NVIDIA graphics cards have been referred to by this new name.
Working principle
Simply put, the GPU is a display chip that supports T&L (Transform and Lighting) in hardware. T&L is an important part of 3D rendering: it computes the 3D position of polygons and processes dynamic lighting effects, and can also be called "geometry processing". A good T&L unit can provide detailed 3D objects and advanced lighting effects. In most PCs, however, T&L operations used to be handled by the CPU (so-called software T&L). Because the CPU has many other tasks besides T&L, such as memory management and input response, performance drops sharply in practice: the graphics card often has to wait for data from the CPU, whose processing speed cannot keep up with the demands of today's complex 3D games. Even a CPU running at 1 GHz or more does not help much, because the problem lies in the design of the PC itself and has little to do with CPU speed.
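To make the "geometry processing" that a T&L unit performs more concrete, here is a minimal, hypothetical CUDA sketch (not taken from any driver or game engine) that transforms one vertex by a 4x4 matrix and evaluates a simple diffuse (Lambertian) lighting term, roughly the per-vertex work described above.

// Minimal sketch of per-vertex "transform and lighting" (T&L).
// Hypothetical helpers, for illustration only: real pipelines add
// projection, clipping, and richer lighting models.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// Transform a vertex position by a 4x4 matrix stored row-major.
__host__ __device__ Vec4 transform_vertex(const float m[16], Vec3 p) {
    Vec4 r;
    r.x = m[0]*p.x  + m[1]*p.y  + m[2]*p.z  + m[3];
    r.y = m[4]*p.x  + m[5]*p.y  + m[6]*p.z  + m[7];
    r.z = m[8]*p.x  + m[9]*p.y  + m[10]*p.z + m[11];
    r.w = m[12]*p.x + m[13]*p.y + m[14]*p.z + m[15];
    return r;
}

// Simple diffuse (Lambert) lighting term: intensity = max(0, N . L).
__host__ __device__ float diffuse(Vec3 n, Vec3 l) {
    float d = n.x*l.x + n.y*l.y + n.z*l.z;
    return d > 0.0f ? d : 0.0f;
}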
Differences between GPU and DSP
The GPU differs from a DSP (Digital Signal Processor) in several major respects. All of its calculations use floating-point arithmetic, and at the time there were no bit or integer arithmetic instructions. In addition, because the GPU is designed specifically for image processing, its storage system is actually a two-dimensional, segmented address space consisting of a segment number (which image to read from) and a two-dimensional address (the X, Y coordinates within that image). Furthermore, there are no indirect write instructions: the output write address is determined by the rasterizer and cannot be changed by the program, which is a great challenge for algorithms whose data is naturally scattered in memory. Finally, communication between different fragments is not allowed; in effect, the fragment processor is a SIMD data-parallel execution unit that executes code independently on every fragment.
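The "gather but no scatter" constraint described above carries over naturally to a modern CUDA kernel in which each thread may read from arbitrary input locations but writes only to the output slot tied to its own index, much as a fragment's write address was fixed by the rasterizer. The kernel below is a hypothetical sketch of that pattern; the names and indices are illustrative, not from any real codebase.

#include <cuda_runtime.h>

// Each thread gathers from arbitrary input positions (read-only lookups)
// but scatters nothing: it writes only to out[i], the slot determined by
// its own thread index -- analogous to the rasterizer-fixed write address.
__global__ void gather_only(const float* in, const int* lookup,
                            float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Gather: read from a data-dependent location...
        float a = in[lookup[i]];
        // ...but the write location is fixed by the thread's own index.
        out[i] = a * 2.0f;
    }
}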
Despite the above constraints, the GPU can efficiently perform a wide variety of operations, from linear algebra and signal processing to numerical simulation. Although the concept is simple, new users are often still confused, because GPU computing has traditionally required specialized graphics knowledge. Several software tools can help. Two high-level shading languages, Cg and HLSL, let users write C-like code that is then compiled into fragment assembly language. Brook is a high-level language designed specifically for GPU computing that requires no graphics knowledge, so it is a good starting point for developers working with GPUs for the first time. Brook is an extension of C that adds simple data-parallel programming constructs which map directly onto the GPU. Data stored and operated on by the GPU is represented as a "stream", similar to an array in standard C. A kernel is a function that operates on streams; calling a kernel on a set of input streams implies an implicit loop over the stream elements, that is, the kernel body is invoked once per element. Brook also provides reductions, such as computing the sum, maximum, or product of all elements in a stream. Brook completely hides the details of the graphics API and virtualizes the parts of the GPU that are unfamiliar to most users, such as the two-dimensional memory system. Applications written in Brook include linear algebra subroutines, fast Fourier transforms, ray tracing, and image processing. On ATI's X800XT and NVIDIA's GeForce 6800 Ultra, many of these applications ran up to seven times faster than equivalent cache- and SSE-assembly-optimized code on a Pentium 4.
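The same "kernel over a stream" plus reduction model can be sketched in CUDA (used here instead of Brook, whose toolchain is no longer common). In this hedged example, the first kernel applies the same body to every stream element, and the second accumulates a sum with atomicAdd as a simple stand-in for Brook's reduce; all names are illustrative.

#include <cuda_runtime.h>

// "Kernel over a stream": the implicit loop Brook describes becomes
// one thread per element, all running the same body.
__global__ void scale(const float* in, float* out, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = k * in[i];
}

// Stand-in for Brook's reduce: accumulate the sum of all elements.
// (A production reduction would use shared memory or a library call;
// atomicAdd keeps the sketch short.)
__global__ void sum_reduce(const float* in, float* result, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(result, in[i]);
}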
Users interested in GPU computing once had to work hard to map their algorithms onto graphics primitives. The advent of high-level programming languages such as Brook makes it easy for newcomers to tap the performance of the GPU, and this easier access has in turn pushed the GPU's evolution forward, so that it serves not only as a rendering engine but also as a major computing engine in the personal computer.
What is the difference between GPU and CPU?
To explain the difference between the two, we should first note what they have in common: both have a bus and connections to the outside world, both have their own cache hierarchy, and both have arithmetic and logic units. In a word, both are designed to carry out computing tasks.
The difference between the two lies in the structure of the on-chip cache hierarchy and the arithmetic and logic units. A CPU has multiple cores, but their number rarely exceeds two digits, and each core has a large enough cache and enough arithmetic and logic units, assisted by plenty of hardware for branch prediction and other complex control logic. The GPU has far more cores than the CPU and is therefore called many-core (NVIDIA's Fermi has 512 cores); each core has a relatively small cache and fewer, simpler arithmetic and logic units (early GPUs were weaker than CPUs at floating-point computation). As a result, the CPU excels at computing tasks with complex steps and complex data dependencies, such as distributed computing, data compression, artificial intelligence, physics simulation, and many others. For historical reasons, the GPU was built for video games, which remain its main driving force to this day. One kind of operation that occurs constantly in 3D games is performing the same operation on massive amounts of data, for example applying the same coordinate transformation to every vertex, or computing the color of every vertex according to the same lighting model. The GPU's many-core architecture is ideally suited to broadcasting the same instruction stream to many cores in parallel, each working on different input data. Around 2003-2004, experts in fields outside of graphics began to notice this computing power and started experimenting with GPUs for general-purpose computation (GPGPU). After NVIDIA released CUDA, and OpenCL followed, GPUs came to be widely used for general computing, including numerical analysis, massive data processing (sorting, MapReduce, and so on), financial analysis, and more.
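The "same operation on every vertex" pattern above is exactly what a GPU kernel launch expresses. Below is a minimal, hypothetical CUDA host-plus-kernel sketch that applies one 4x4 transform to a million vertex positions in parallel; the buffer names, sizes, and the identity matrix used here are illustrative only.

#include <cuda_runtime.h>
#include <vector>

// The same 4x4 transform (row-major, in constant memory) is applied to
// every vertex by a different thread: one instruction stream, many data.
__constant__ float M[16];

__global__ void transform_vertices(const float4* in, float4* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 p = in[i];
    out[i] = make_float4(
        M[0]*p.x  + M[1]*p.y  + M[2]*p.z  + M[3]*p.w,
        M[4]*p.x  + M[5]*p.y  + M[6]*p.z  + M[7]*p.w,
        M[8]*p.x  + M[9]*p.y  + M[10]*p.z + M[11]*p.w,
        M[12]*p.x + M[13]*p.y + M[14]*p.z + M[15]*p.w);
}

int main() {
    const int n = 1 << 20;                                  // one million vertices
    std::vector<float4> h(n, make_float4(1.f, 2.f, 3.f, 1.f));
    float id[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};    // identity transform

    float4 *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float4));
    cudaMalloc(&d_out, n * sizeof(float4));
    cudaMemcpy(d_in, h.data(), n * sizeof(float4), cudaMemcpyHostToDevice);
    cudaMemcpyToSymbol(M, id, sizeof(id));                  // upload the transform

    int block = 256;
    transform_vertices<<<(n + block - 1) / block, block>>>(d_in, d_out, n);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}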
In short, when programmers write programs for the CPU, they tend to use complex logic structures and optimized algorithms to reduce the latency of a computing task, that is, the time until its result is ready. When programmers write programs for the GPU, they exploit its ability to process massive amounts of data, masking latency by increasing total throughput. The gap between CPU and GPU is gradually narrowing, because GPUs have made great progress in handling irregular tasks and inter-thread communication; on the other hand, power consumption is a more serious issue for GPUs than for CPUs.
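One concrete way this throughput orientation shows up in GPU code is the common grid-stride-loop idiom: rather than minimizing the latency of any single element, the program launches many more threads than there are cores and lets each thread walk across the data, so memory stalls on some elements are hidden by work on others. A hedged sketch of that idiom follows; the kernel name and operation are illustrative.

#include <cuda_runtime.h>

// Grid-stride loop: oversubscribe the GPU with threads so that while some
// threads wait on memory, others compute -- trading single-element latency
// for total throughput.
__global__ void saxpy(float a, const float* x, float* y, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x) {
        y[i] = a * x[i] + y[i];
    }
}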
In general, the difference between GPU and CPU is a big topic; it could easily fill a semester-long course of thirty-some lectures.