
        NVIDIA Tesla P100

        The World's First AI Supercomputing Data Center GPU

        INFINITE COMPUTE POWER FOR THE
        MODERN DATA CENTER

        Today's data centers rely on many interconnected commodity compute nodes, limiting the performance of high performance computing (HPC) and hyperscale workloads. The NVIDIA® Tesla® P100 taps into the NVIDIA Pascal™ GPU architecture to deliver a unified platform for accelerating both HPC and AI, dramatically increasing throughput while also reducing costs.

        A NEW LEVEL OF APPLICATION PERFORMANCE

        With over 600 HPC applications accelerated—including 15 out of the top 15—and all deep learning frameworks, Tesla P100 with NVIDIA NVLink delivers up to a 50X performance boost.

        Tesla P100 Application Speed-Up Performance Chart

        FEATURES AND BENEFITS

        Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.

        NVIDIA Pascal™ Architecture

        Exponential Performance Leap with Pascal Architecture

        The NVIDIA Pascal architecture enables the Tesla P100 to deliver superior performance for HPC and hyperscale workloads. With more than 21 teraFLOPS of 16-bit floating-point (FP16) performance, Pascal is optimized to drive exciting new possibilities in deep learning applications. For HPC workloads, Pascal also delivers more than 5 teraFLOPS of double-precision and 10 teraFLOPS of single-precision performance.
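
        As a minimal sketch of how that FP16 throughput is exposed to developers (a hypothetical illustration, not official sample code; it assumes the CUDA toolkit's `cuda_fp16.h` intrinsics and a GPU compiled for sm_60 or later):

        ```cuda
        // Sketch: half-precision (FP16) math with CUDA intrinsics.
        // Illustrative only; requires nvcc targeting sm_60 (Pascal) or later.
        #include <cuda_fp16.h>

        // y[i] = a * x[i] + y[i], computed entirely in half precision.
        // half2 operations process two FP16 values per instruction, which is
        // how Pascal-class GPUs reach roughly 2X their FP32 rate in FP16.
        __global__ void haxpy(const __half2 *x, __half2 *y, __half2 a, size_t n2) {
            size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
            if (i < n2) y[i] = __hfma2(a, x[i], y[i]);  // fused multiply-add on an FP16 pair
        }
        ```

        Packing values into `__half2` pairs is the key design point: the doubled FP16 rate comes from vectorized two-wide instructions, not from faster scalar math.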

        Unprecedented Efficiency with CoWoS with HBM2

        Tesla P100 tightly integrates compute and data on the same package by adding chip-on-wafer-on-substrate (CoWoS) with HBM2 technology to deliver 3X more memory performance over the NVIDIA Maxwell™ architecture. This provides a generational leap in time to solution for data-intensive applications.

        NVIDIA NVLink™ High-Speed Bidirectional Interconnect

        Applications at Massive Scale with NVIDIA NVLink

        Performance is often throttled by the interconnect. The revolutionary NVIDIA NVLink high-speed, bidirectional interconnect is designed to scale applications across multiple GPUs, delivering 5X higher bandwidth than today's best-in-class PCIe interconnect.

        Page Migration Engine

        Simpler Programming with Page Migration Engine

        Page Migration Engine frees developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU's physical memory size to virtually limitless amounts of memory.
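
        The capability above can be sketched with the standard CUDA Unified Memory API (a hypothetical illustration, not official sample code; it assumes the CUDA toolkit and a Pascal-or-later GPU, where the Page Migration Engine faults pages to whichever processor touches them):

        ```cuda
        // Sketch: CUDA Unified Memory with on-demand page migration (Pascal+).
        // Illustrative only; requires nvcc and a Pascal-or-later GPU.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void scale(float *data, size_t n, float factor) {
            size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
            if (i < n) data[i] *= factor;
        }

        int main() {
            size_t n = 1 << 20;
            float *data;
            // One allocation visible to both CPU and GPU. On Pascal, the size
            // may even exceed GPU physical memory; pages migrate on demand.
            cudaMallocManaged(&data, n * sizeof(float));

            for (size_t i = 0; i < n; ++i) data[i] = 1.0f;  // CPU writes fault pages to host

            scale<<<(unsigned)((n + 255) / 256), 256>>>(data, n, 2.0f);  // GPU faults pages in
            cudaDeviceSynchronize();

            printf("data[0] = %.1f\n", data[0]);            // CPU read faults pages back
            cudaFree(data);
            return 0;
        }
        ```

        There is no explicit `cudaMemcpy` in this sketch; the hardware page-faulting path is what lets applications oversubscribe GPU memory rather than managing transfers by hand.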

        TESLA P100 PRODUCTS

        NVIDIA Tesla P100 for Strong-Scale HPC

        Tesla P100 with NVIDIA NVLink technology enables lightning-fast nodes to substantially accelerate time to solution for strong-scale applications. A server node with NVLink can interconnect up to eight Tesla P100s at 5X the bandwidth of PCIe. It's designed to help solve the world's most important challenges that have infinite compute needs in HPC and deep learning.

        NVIDIA Tesla P100 for Mixed-Workload HPC

        Tesla P100 for PCIe enables mixed-workload HPC data centers to realize a dramatic jump in throughput while saving money. For example, a single GPU-accelerated node powered by four Tesla P100s interconnected with PCIe replaces up to 32 commodity CPU nodes for a variety of applications. Completing all the jobs with far fewer powerful nodes means that customers can save up to 70 percent in overall data center costs.

        PERFORMANCE SPECIFICATION

        Specification                                        P100 for PCIe-Based    P100 for NVLink-Optimized
                                                             Servers                Servers
        Double-Precision Performance                         4.7 teraFLOPS          5.3 teraFLOPS
        Single-Precision Performance                         9.3 teraFLOPS          10.6 teraFLOPS
        Half-Precision Performance                           18.7 teraFLOPS         21.2 teraFLOPS
        NVIDIA NVLink Interconnect Bandwidth                 -                      160 GB/s
        PCIe x16 Interconnect Bandwidth                      32 GB/s                32 GB/s
        CoWoS HBM2 Stacked Memory Capacity                   16 GB or 12 GB         16 GB
        CoWoS HBM2 Stacked Memory Bandwidth                  732 GB/s or 549 GB/s   732 GB/s
        Enhanced Programmability with Page Migration Engine  Yes                    Yes
        ECC Protection for Reliability                       Yes                    Yes
        Server-Optimized for Data Center Deployment          Yes                    Yes

        PRODUCT LITERATURE

        TAKE A FREE TEST DRIVE

        The World's Fastest GPU Accelerators for HPC and
        Deep Learning.

        WHERE TO BUY

        Find an NVIDIA Accelerated Computing Partner through our
        NVIDIA Partner Network (NPN).