
What is a supercomputer? Full information

Supercomputer 

A supercomputer is a type of computer with a much higher level of performance than a general-purpose computer. Supercomputer performance is usually measured in floating-point operations per second (FLOPS) rather than millions of instructions per second (MIPS). As of 2017, supercomputers exist that can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). [3] For comparison, the performance of a desktop computer ranges from hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). [4] [5] As of November 2017, all 500 of the world's fastest supercomputers run Linux-based operating systems. [6] Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful, and technologically superior exascale supercomputers.
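
As a concrete illustration of the FLOPS metric mentioned above, here is a minimal Python sketch (assuming NumPy is installed) that estimates a machine's floating-point throughput by timing a dense matrix multiplication, which costs roughly 2*n^3 floating-point operations. This is only a rough illustration of the unit, not a real benchmark such as Linpack.

# Rough sketch: estimate floating-point throughput in GFLOPS by timing a
# dense n x n matrix multiply, which performs roughly 2*n**3 operations.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                      # ~2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

gflops = (2 * n**3) / elapsed / 1e9
print(f"approximately {gflops:.1f} GFLOPS on this machine")
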
The world's first supercomputer

Supercomputers in computational science

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). They have also been essential in the field of cryptanalysis.
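
As a toy example of the kind of grid-based physical simulation listed above, the following Python sketch (NumPy assumed) advances a 1-D heat equation with an explicit finite-difference stencil. Production weather or climate codes solve far larger 3-D versions of such stencils spread across thousands of processors.

# Toy 1-D heat diffusion: u[i] += r * (u[i+1] - 2*u[i] + u[i-1]).
import numpy as np

nx = 100
r = 0.4                      # alpha*dt/dx**2, kept <= 0.5 for stability
u = np.zeros(nx)
u[nx // 2] = 100.0           # initial hot spot in the middle of the rod

for _ in range(500):
    u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(f"peak temperature after 500 steps: {u.max():.2f}")
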
A circuit board of an IBM 7030

Supercomputers were introduced in the 1960s, and for several decades the fastest computers were built by Seymour Cray at Control Data Corporation (CDC), Cray Research, and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran faster than their more general-purpose contemporaries. Over the course of the decade, increasing amounts of parallelism were added, with one to four processors being common. In the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. Since then, massively parallel supercomputers with thousands of off-the-shelf processors have become the norm.
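
To make the vector-processing idea concrete, here is a small Python sketch (NumPy assumed) contrasting an element-by-element scalar loop with the whole-array style of computation that vector machines such as the Cray-1 were built around. NumPy's array arithmetic merely stands in for hardware vector units here.

# The same a*x + y computation written in scalar style and in vector style.
import numpy as np

x = np.random.rand(100_000)
y = np.random.rand(100_000)
a = 2.5

# Scalar style: one element at a time, as a conventional CPU loop would do.
z_scalar = np.empty_like(x)
for i in range(x.size):
    z_scalar[i] = a * x[i] + y[i]

# Vector style: one operation applied to the whole array at once.
z_vector = a * x + y

assert np.allclose(z_scalar, z_vector)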












The United States has long been a leader in the supercomputer field, first through Cray's almost uninterrupted dominance of it, and later through a variety of technology companies. Japan made major advances in the field in the 1980s and 1990s, and China has become increasingly active in it since. As of May 2022, the fastest supercomputer on the TOP500 list is Frontier in the US, with a Linpack benchmark score of 1.102 exaFLOPS, followed by Fugaku. [11] Five of the top 10 are from the US; China has two; Japan, Finland, and France have one each. [12] In June 2018, all supercomputers on the TOP500 list combined broke the 1 exaFLOPS mark.

 History of supercomputers 













In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered one of the first supercomputers, for the U.S. Naval Research and Development Center. It still used high-speed drum memory rather than the newly emerging disk drive technology. [14] Also among the first supercomputers was the IBM 7030 Stretch, built by IBM for Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing machine. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, and prefetched data through a memory controller, and it included pioneering random access disk drives. It was completed in 1961 and, despite not meeting the goal of a hundredfold increase in performance, was purchased by Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.
The third pioneering supercomputer project of the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. Kilburn designed the Atlas to have memory space for up to a million 48-bit words, but because magnetic storage of that capacity was unaffordable, the Atlas's actual core memory held only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas operating system swapped data in the form of pages between the magnetic core and the drum. It also introduced time-sharing to supercomputing, allowing more than one program to execute on the supercomputer at the same time. [16] Atlas was a joint venture between Ferranti and the University of Manchester and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.











The CDC 6600, designed by Seymour Cray, was released in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run faster, and the overheating problem was solved by incorporating refrigeration into the supercomputer's design. [18] The CDC 6600 thus became the fastest computer in the world. Given that the 6600 outperformed all other contemporary computers by roughly a factor of ten, it was dubbed a supercomputer and defined the supercomputing market, with one hundred computers sold at $8 million each.

 Cray left CDC in 1972 to form his own company, Cray Research. [20] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history.
The Cray-2 was released in 1985. It had eight central processing units (CPUs) and liquid cooling, with the electronics coolant Fluorinert pumped through the supercomputer's architecture. It reached 1.9 gigaFLOPS, making it the first supercomputer to break the gigaflop barrier.
The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a truly massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast to vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate pieces of data to entirely different processors and then recombines the results. The ILLIAC design was finalized in 1966 with 256 processors and offered speeds of up to 1 GFLOPS, while the 1970s Cray-1 had a peak of 250 MFLOPS. However, due to development problems only 64 processors were built, and the system could never operate faster than about 200 MFLOPS, despite being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance out of it was a matter of serious effort.
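
The scatter-and-recombine idea described above can be sketched with Python's standard multiprocessing module, used here purely as a stand-in for the many processing elements of a machine like the ILLIAC IV: the data is split into pieces, each worker processes its own piece independently, and the partial results are combined at the end.

# Toy data-parallel pattern: split the data, process the pieces in separate
# worker processes, then recombine the partial results.
from multiprocessing import Pool

import numpy as np

def partial_sum_of_squares(chunk):
    # Each worker handles only its own slice of the data.
    return float(np.sum(chunk * chunk))

if __name__ == "__main__":
    data = np.random.rand(1_000_000)
    pieces = np.array_split(data, 8)              # scatter: 8 pieces of data
    with Pool(processes=8) as pool:
        partials = pool.map(partial_sum_of_squares, pieces)
    total = sum(partials)                         # recombine the results
    print(f"sum of squares: {total:.2f}")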

But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping, "If you were plowing a field, which would you use? Two strong oxen or 1024 chickens?" [26] But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM), which evolved from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors linked together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.
In 1982, Osaka University's LINKS-1 computer graphics system used a massively parallel processing architecture with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics.
Fujitsu's VPP500 from 1992 is unusual because, to achieve higher speeds, its processor used GaAs, a material normally reserved for microwave applications due to its toxicity. [29] Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to achieve the top spot in 1994 with a peak speed of 1.7 gigaflops (GFLOPS) per processor. [30] [31] The Hitachi SR2201 achieved a peak performance of 600 GFLOPS in 1996 using 2048 processors connected via a fast three-dimensional crossbar network.
The Intel Paragon could have from 1,000 to 4,000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine that linked processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.
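
Below is a minimal message-passing sketch in the spirit of the Paragon's programming model, written with the mpi4py package (an assumption for illustration; the original machines used their own message-passing libraries). Each rank computes a partial sum over its own share of the work and the results are combined with a reduction; it would be launched with something like `mpiexec -n 4 python sum_mpi.py` (the file name is hypothetical).

# Each MPI rank sums its own share of 0..n-1; rank 0 collects the grand total.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000
local = np.arange(rank, n, size, dtype=np.float64)   # this rank's slice
local_sum = float(local.sum())

total = comm.reduce(local_sum, op=MPI.SUM, root=0)   # message-passing reduction
if rank == 0:
    print(f"total = {total:.0f}  (expected {n * (n - 1) / 2:.0f})")
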
Software development remained a problem, but the CM series sparked considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC, and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units instead of custom chips. By the turn of the 21st century, designs with thousands of commodity CPUs became the norm, with later machines adding graphics processing units to the mix.
In 1998, David Bader developed the first Linux supercomputer using commodity parts.
While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed, low-latency interconnection network. The prototype used an Alta Technologies "AltaCluster" of eight dual, 333 MHz, Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for the National Computational Science Alliance (NCSA) members' code, along with other necessary components to ensure interoperability, since none of it had previously run on Linux. Using the successful prototype design, he led the development of "Roadrunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. Roadrunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world. Although Linux-based clusters using consumer-grade parts, such as Beowulf clusters, existed before Bader's prototype and Roadrunner, they lacked the scalability, bandwidth, and parallel computing capabilities to be considered "true" supercomputers.

Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organized as distributed, diverse administrative domains, is used opportunistically whenever a computer is available. In the other approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system, the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects. The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.

As the price, performance, and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved, many petaFLOPS supercomputers such as Tianhe-I and Nebulae have begun to rely on them.

However, other systems, such as the K computer, continue to use conventional processors such as SPARC-based designs, and the overall applicability of GPGPUs in general-purpose high-performance computing has been a subject of debate: while a GPGPU can be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent tuning the application to it. Nevertheless, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs.
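
A rough sketch of the GPGPU approach is shown below using the CuPy library (an assumption for illustration; it requires a CUDA-capable GPU). The same array-style code is executed on the GPU rather than the CPU, which is the basic appeal, while the caveats above about tuning still apply to workloads less regular than dense linear algebra.

# Offload a dense matrix multiply to the GPU with CuPy (assumed installed).
import cupy as cp

n = 4000
a = cp.random.rand(n, n)
b = cp.random.rand(n, n)

c = a @ b                           # kernel runs on the GPU
cp.cuda.Stream.null.synchronize()   # wait for the asynchronous kernel to finish
print(float(c[0, 0]))
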
High-performance computers have an expected life cycle of about three years before requiring an upgrade. The Gyoukou supercomputer is unique in that it uses both a massively parallel design and liquid immersion cooling.
