diff --git a/content/blog/1303-2.md b/content/blog/1303-2.md new file mode 100644 index 0000000..5b1580d --- /dev/null +++ b/content/blog/1303-2.md @@ -0,0 +1,30 @@ +--- +title: "Changing the Game: Accelerating Applications and Improving Performance For Greater Data Center Efficiency" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Abstract + +Planning for exascale, accelerating time to discovery and extracting results from massive data sets requires organizations to continually seek faster and more efficient solutions to provision I/O and accelerate applications.  New burst buffer technologies are being introduced to address the long-standing challenges associated with the overprovisioning of storage by decoupling I/O performance from capacity. Some of these solutions allow large datasets to be moved out of HDD storage and into memory quickly and efficiently. Then, data can be moved back to HDD storage once processing is complete much more efficiently with unique algorithms that align small and large writes into streams, thus enabling users to implement the largest, most economical HDDs to hold capacity. + +This type of approach can significantly reduce power consumption, increase data center density and lower system cost. It can also boost data center efficiency by reducing hardware, power, floor space and the number of components to manage and maintain. Providing massive application acceleration can also greatly increase compute ROI by returning wasted processing cycles to compute that were previously managing storage activities or waiting for I/O from spinning disk. + +This session will explain how the latest burst buffer cache and I/O accelerator applications can enable organizations to separate the provisioning of peak and sustained performance requirements with up to 70 percent greater operational efficiency and cost savings than utilizing exclusively disk-based parallel file systems via a non-vendor-captive software-based approach. + +### Speaker Bio + +[Jeff Sisilli](https://www.linkedin.com/profile/view?id=5907154&authType=NAME_SEARCH&authToken=pSpl&locale=en_US&srchid=32272301421438011111&srchindex=1&srchtotal=1&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438011111%2CVSRPtargetId%3A5907154%2CVSRPcmpt%3Aprimary), senior director of product marketing at DataDirect Networks, has over 12 years experience creating and driving enterprise hardware, software and professional services offerings and effectively bringing them to market. Jeff is often quoted in storage industry publications for his expertise in software-defined storage and moving beyond traditional approaches to decouple performance from capacity. 
+ +### Speaker Organization + +DataDirect Networks + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Sisilli_OPFS2015_031815.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/2018-openpower-capi-and-opencapi-heterogeneous-computing-design-contest-build-your-own-super-processor.md b/content/blog/2018-openpower-capi-and-opencapi-heterogeneous-computing-design-contest-build-your-own-super-processor.md new file mode 100644 index 0000000..2ef917a --- /dev/null +++ b/content/blog/2018-openpower-capi-and-opencapi-heterogeneous-computing-design-contest-build-your-own-super-processor.md @@ -0,0 +1,47 @@ +--- +title: "2018 OpenPOWER/CAPI and OpenCAPI Heterogeneous Computing Design Contest - Build Your Own Super Processor" +date: "2018-08-01" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +Organized by IBM China, IPS (Inspur Power Commercial Systems), The OpenPOWER Foundation, the OpenCAPI Consortium and Fudan University Microelectronics College, 2018 CAPI/OpenCAPI heterogeneous computing design contest begins July 6th. + +The objective of the contest is to encourage universities and scientific research institutions to understand the advanced technology of FPGA heterogeneous computing on the OpenPOWER system and prepare applications for technological innovation. The participants will have the opportunity to cooperate with members of the OpenPOWER Foundation, the OpenCAPI Consortium to develop  prototypes on a OpenPOWER platform while receiving technical guidance from from sponsor companies experts. + +The contest is sponsored by OpenPOWER Foundation members Shenzhen Semptian data Limited., Mellanox Technologies, Nallatech (a Molex company) and Xilinx, Inc, + +## Background + +Heterogeneous Computing is a system that uses more than one processor. This multi-core system not only enhances the performance of the processor core, but also incorporates specialized processing capabilities, such as GPU or FPGA, to work on specific tasks. + +In recent years, as the silicon chip approaches physical and economic cost limits, Moore's law is dead.  The rapid development of the Internet, the explosive growth of information and the popularization of AI technology have highly increased the demand for computing power. Heterogeneous computing, the focus is not only limited to the improvement of CPU performance, but to break the bottleneck of data transmission between CPU and peripherals, and to allow more hardware devices to participate in computing, such as using dedicated hardware to be responsible for intensive computing or peripherals management, which can significantly improve the performance of the whole system. There is no doubt that heterogeneous computing is the main direction of improving computing power. + +Participants in the OpenCAPI heterogeneous computing design competition can achieve insight by utilizing and optimizing the most advanced technology available through OpenPOWER architecture. This competition will provide an opportunity to create breakthrough technologies and for enterprise and research workloads.  + +## Contest Rules + +The contest will begin on July 6th with submissions due by Nov. 23rd.   The winner will be announced publically at the OpenPOWER China Summit 2018 in December in Beijing.  
The exact announcement date is yet to be determined. + +In the preliminaries, participants will submit a solution proposal for an FPGA accelerator based on CAPI/OpenCAPI technology on OpenPOWER systems. The accelerator can serve any workload that requires high computing power or big data transaction bandwidth. Ten winners of the preliminaries will be selected and awarded funds to support them moving on to the final. + +In the final, participants will develop a prototype of their proposed solution in a real development environment. Sponsors will provide them with OpenPOWER systems and CAPI/OpenCAPI-enabled FPGA cards, as well as technical experts who will provide coding and debugging guidance for the CAPI development framework. + +## Timeline + +
| Schedule | Time | Content |
| --- | --- | --- |
| Preliminary | 7/6-8/15 | Enroll and submit proposal |
| | 8/16-8/26 | Expert review |
| | 8/27 | Announce top 10 for the final |
| Final | 8/27-11/23 | Prototype development and submission for the final |
| | 11/24-11/29 | Expert review |
| | (TBD) | Final thesis oral defense and award ceremony |
+ +## Audience and Enrollment + +College students from Chinese universities and research institutes who are interested in CAPI/OpenCAPI technology are welcome to join. They are also welcome to join the OpenPOWER Foundation at the Associate or Academic level for free ([https://openpowerfoundation.org/membership/levels/](https://openpowerfoundation.org/membership/levels/)). + +Click [More information](https://mp.weixin.qq.com/s?__biz=MjM5MDk3Mjk0MQ==&mid=509982703&idx=1&sn=48ee68fbdd54b1437e78a1d9c2285864&chksm=3d2dba8d0a5a339baf271ed5cedf51d8f29c097e488244623bac2d6f61121c3292fc45de56a4&scene=18&key=1d3ba184c3454c150135581fb2c6d4fd1a55a420799f8) to learn more about the contest. + +Click [Enroll](http://dsgapp.cn.edst.ibm.com/bps/OpenCAPI/index.html?lectureId=1&project_id=2) to register. + +## Messages from Organizers and Sponsors + +
**Waiming Wu, General Manager, IBM OpenPOWER China**
With the ever-increasing demand for computing power today, OpenPOWER, based on the IBM POWER processor and Linux technology, has attracted more and more attention from customers, developers and business partners. OpenPOWER systems, with their excellent computing and processing capabilities, are ideal for AI, big data and cloud platforms. The OpenCAPI technology used in OpenPOWER systems supports heterogeneous computing, so that innovations in accelerators can be quickly integrated with the POWER processor to provide the next level of computing performance. The new concept of heterogeneous computing, based on collaboration between the CPU and accelerators, heralds a new computing era.
We are pleased to see the announcement and roll-out of "The OpenCAPI + OpenPOWER Heterogeneous Computing Contest" for universities and research institutions. The OpenPOWER Foundation, the OpenCAPI Consortium, Fudan University and many OpenPOWER members actively support this activity. This is the best demonstration of the academic and corporate communities' support for technological innovation. At IBM we will also do our best to co-organize this event and to contribute to developing talent and innovative solutions.
We are also grateful to the technical experts at the IBM China System Lab. During this contest, they will share leading technology with the competing teams through in-depth technology seminars, carefully prepared technical documents, upcoming one-to-one expert support and, of course, great technical mentorship.
**Hugh Blemings, Executive Director, OpenPOWER Foundation**
At the OpenPOWER Foundation we're delighted to see our members like Mellanox, Nallatech, Semptian, Xilinx and of course IBM working together in the "OpenCAPI + OpenPOWER Contest". CAPI/OpenCAPI is a key part of the great open system that OpenPOWER represents and a leading high-speed interconnect for accelerators and interconnects alike. Our members, working with some great universities and research institutions in China, will provide both an opportunity for people to learn about CAPI/OpenCAPI and a chance to see real-world problems solved faster using innovative OpenPOWER hardware and software. We're looking forward to seeing what innovative ideas the contestants come up with and, of course, congratulating the winners at the OpenPOWER Summit in Beijing in December. We wish all involved the very best!
**Yujing Jiang, Product and Marketing Director, Inspur Power Commercial Systems Co., Ltd**
Inspur Power Commercial Systems Co., Ltd. is a platinum member of the OpenPOWER Foundation, committed to co-building an open OpenPOWER ecosystem, developing servers based on open POWER technology, improving server ecosystems, building a sustainable server business and providing users with advanced, differentiated and diverse computing platforms and solutions. Inspur Power Systems insists on openness and integration for the continuous development of heterogeneous computing architecture based on CAPI. CAPI heterogeneous computing breaks through the 'computing walls', enhances massively parallel data processing capabilities and provides more effective and powerful data resources for image and video, deep learning and database workloads. CAPI heterogeneous computing also provides extremely high data transmission bandwidth, defines a more flexible data storage method, and greatly improves server I/O capabilities.
Inspur Power Systems will provide the OpenPOWER-based data center server FP5280G2 as the platform for the contest, used to verify and test the entries. It is the first POWER9 platform in China, designed for cloud computing, big data and deep learning, and optimized for performance, expansion and storage. The standalone FP5280G2 provides the industry-leading PCIe Gen4 (16Gbps) channel and supports CAPI 2.0. We hope this new system will provide effective support for the contest. In the future, we can build more systems that enhance heterogeneous computing with more CAPI-based interconnection between the CPU and memory, network and I/O devices, and see them widely applied across the industry.
**Yibo Fan, Associate Professor, Fudan University Microelectronics College, China**
CAPI/OpenCAPI is a unique technology in the OpenPOWER system, which provides a superior operating environment for FPGA heterogeneous computing design, especially by eliminating the driver development process and providing the most convenient method for rapid chip IP prototype verification and the deployment of heterogeneous systems. Based on CAPI technology, our team released a working CAPI example of an open-source H.265 video encoder. Through the technical cooperation in that project, we fully realized the innovative value of CAPI technology for heterogeneous computing. Hopefully, by hosting this contest, we can connect with more excellent teams and talented students who study and master CAPI technology at peer universities, and promote CAPI/OpenCAPI technology further in universities and industry.
**Qingchun Song, Asia & Pacific Marketing Director, Mellanox Technologies**
As a member of OpenPOWER, Mellanox is pleased to be involved in the optimization of OpenCAPI. As a provider of intelligent end-to-end network products, Mellanox has always worked closely with x86 and POWER processor platforms. Mellanox intelligent network products have always been the best choice for the POWER platform.
In June 2018, the Summit supercomputer from Oak Ridge National Laboratory in the US was announced at the International Supercomputing Conference in Frankfurt, Germany. It uses POWER CPUs plus Mellanox's InfiniBand network, and it is now the fastest supercomputer and artificial intelligence computer in the world.
Mellanox network products currently support 100 Gb/s. Products with a speed of 200 Gb/s per port will be released to market in the next quarter. Higher network speeds require the support of faster internal buses, and high-speed OpenCAPI and 200G network products are an excellent match.
I hope that this OpenCAPI optimization contest can effectively improve the performance of CAPI and realize integration with RDMA technology, truly matching internal buses to external network buses and helping the next-generation data center. Finally, I wish the contest a smooth run. Thank you.
**Hao Li, General Manager, Semptian Data Co., Ltd**
Many thanks to the organizers of the event for inviting Semptian to participate in the 2018 OpenCAPI Heterogeneous Computing Design Contest. In recent years, the approach of relying solely on the CPU to improve computing performance has come to an end. At the same time, rapidly emerging applications place higher demands on computing ability and constantly challenge the performance limit. It has become a consensus in the industry that heterogeneous computing is how we can break the bottlenecks of computing and data transmission.
As a company with more than ten years of experience in FPGA development, Semptian believes that the FPGA's advantages of high performance, low power consumption, flexibility and ease of use, combined with the special technical advantages of CAPI technology in OpenPOWER systems, make offloading specific computation to FPGA + CAPI + CPU the best way to optimize computing performance, reduce acquisition and operating costs, and meet application and power consumption requirements.
We are very glad to participate in this contest together with other members of the OpenPOWER alliance to expand the development of the alliance’s ecosystem. Hopefully through this contest, we can explore more application scenarios, like artificial intelligence inference, image and video acceleration and gene computing acceleration, to expand the application of heterogeneous computing.
**Fan Kui, Account Sales Manager, Nallatech**
Nallatech and IBM have worked closely through the OpenPOWER Foundation to enable heterogeneous computing by way of CAPI 1.0 and CAPI 2.0 based FPGA accelerators. Nallatech's 250S FPGA accelerator supports CAPI 1.0 and the 250S+ supports CAPI 2.0. Additionally, the OpenPOWER Accelerator Workgroup's "CAPI SNAP Acceleration Framework" is also supported on these cards. CAPI SNAP eases the development of Accelerator Function Units (AFUs) within the FPGA in OpenPOWER systems. As you may very well know, FPGA computing is one of the leading technologies in the development of AI and deep learning, and is one of the most exciting advancements that will affect how we live our lives.
We are all proud to sponsor such an aspirational academic event with the students of China, one that will foster amazing innovations in FPGA technology for generations to come. Thank you for the opportunity to sponsor your event. We wish you great fortune in this contest, as well as in your career in FPGA acceleration.
diff --git a/content/blog/2018-openpower-capi-and-opencapi-heterogeneous-computing-design-contest.md b/content/blog/2018-openpower-capi-and-opencapi-heterogeneous-computing-design-contest.md new file mode 100644 index 0000000..d864f60 --- /dev/null +++ b/content/blog/2018-openpower-capi-and-opencapi-heterogeneous-computing-design-contest.md @@ -0,0 +1,91 @@ +--- +title: "2018 OpenPOWER/CAPI and OpenCAPI Heterogeneous Computing Design Contest" +date: "2018-07-27" +categories: + - "press-releases" + - "blogs" +--- + +\[vc\_row css\_animation="" row\_type="row" use\_row\_as\_full\_screen\_section="no" type="full\_width" angled\_section="no" text\_align="left" background\_image\_as\_pattern="without\_pattern"\]\[vc\_column\]\[vc\_column\_text css=".vc\_custom\_1538078932412{margin-bottom: 20px !important;}"\] + +# Build Your Own Super Processor + +\[/vc\_column\_text\]\[vc\_column\_text\]Organized by IBM China, IPS (Inspur Power Commercial Systems), The OpenPOWER Foundation, the OpenCAPI Consortium and Fudan University Microelectronics College, 2018 CAPI/OpenCAPI heterogeneous computing design contest begins July 6th. + +The objective of the contest is to encourage universities and scientific research institutions to understand the advanced technology of FPGA heterogeneous computing on the OpenPOWER system and prepare applications for technological innovation. The participants will have the opportunity to cooperate with members of the OpenPOWER Foundation, the OpenCAPI Consortium to develop  prototypes on a OpenPOWER platform while receiving technical guidance from from sponsor companies experts. + +The contest is sponsored by OpenPOWER Foundation members Shenzhen Semptian data Limited., Mellanox Technologies, Nallatech (a Molex company) and Xilinx, Inc,\[/vc\_column\_text\]\[vc\_column\_text css=".vc\_custom\_1538077233210{margin-top: 20px !important;}"\] + +## Background + +Heterogeneous Computing is a system that uses more than one processor. This multi-core system not only enhances the performance of the processor core, but also incorporates specialized processing capabilities, such as GPU or FPGA, to work on specific tasks. + +In recent years, as the silicon chip approaches physical and economic cost limits, Moore’s law is dead.  The rapid development of the Internet, the explosive growth of information and the popularization of AI technology have highly increased the demand for computing power. Heterogeneous computing, the focus is not only limited to the improvement of CPU performance, but to break the bottleneck of data transmission between CPU and peripherals, and to allow more hardware devices to participate in computing, such as using dedicated hardware to be responsible for intensive computing or peripherals management, which can significantly improve the performance of the whole system. There is no doubt that heterogeneous computing is the main direction of improving computing power. + +Participants in the OpenCAPI heterogeneous computing design competition can achieve insight by utilizing and optimizing the most advanced technology available through OpenPOWER architecture. This competition will provide an opportunity to create breakthrough technologies and for enterprise and research workloads.\[/vc\_column\_text\]\[vc\_column\_text css=".vc\_custom\_1538077241426{margin-top: 20px !important;}"\] + +## Contest Rules + +The contest will begin on July 6th with submissions due by Nov. 23rd.   
The winner will be announced publically at the OpenPOWER China Summit 2018 in December in Beijing.  Announcement date yet to be determined + +In preliminaries, participants will submit a solution proposal of a FPGA accelerator based on CAPI/OpenCAPI technology on OpenPOWER systems. The accelerator can serve any workload that requires high computing power or big data transaction bandwidth.  Ten winners of preliminaries will be selected and awarded funds to support them moving on to the final. + +In final, participants will develop a prototype of their solution proposed in real development environment. Sponsors will provide them with OpenPOWER systems + CAPI/OpenCAPI enabled FPGA cards as well as technical expects that will provide coding and debugging skills for the CAPI development framework.\[/vc\_column\_text\]\[vc\_column\_text css=".vc\_custom\_1538077250723{margin-top: 20px !important;}"\] + +## Timeline + +\[/vc\_column\_text\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation="" css=".vc\_custom\_1538077211372{margin-top: 20px !important;background-color: #007aad !important;}"\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\] + +### Schedule + +\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\] + +### Time + +\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="3/5"\]\[vc\_column\_text\] + +### Content + +\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\] + +#### Preliminary + +\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\]7/6-8/15\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="3/5"\]\[vc\_column\_text\]Enroll and submit proposal\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/5"\]\[/vc\_column\_inner\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\]8/16-8/26\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="3/5"\]\[vc\_column\_text\]Expert Review\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation="" css=".vc\_custom\_1538076119836{padding-bottom: 1px !important;}"\]\[vc\_column\_inner width="1/5"\]\[/vc\_column\_inner\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\]8/27\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="3/5"\]\[vc\_column\_text\]Announce Top 10 for Final\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\] + +#### Final + +\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\]8/27-11/23\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="3/5"\]\[vc\_column\_text\]Prototype development and submission for final\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/5"\]\[/vc\_column\_inner\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\]11/24-11/29\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner 
width="3/5"\]\[vc\_column\_text\]Expert Review\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation="" css=".vc\_custom\_1538076138264{padding-bottom: 2px !important;}"\]\[vc\_column\_inner width="1/5"\]\[/vc\_column\_inner\]\[vc\_column\_inner width="1/5"\]\[vc\_column\_text\](TBD)\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="3/5"\]\[vc\_column\_text\]Final Thesis Oral Defense and Award Ceremony\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation="" css=".vc\_custom\_1538077273553{margin-top: 20px !important;}"\]\[vc\_column\_inner\]\[vc\_column\_text\] + +## Audiences and Enroll + +College students from China universities and research institutes, who are interested in the CAPI/OpenCAPI technology are welcome to join.  They are also welcome to join the OpenPOWER Foundation at the Associate or Academic Level for free ([https://openpowerfoundation.org/membership-2/levels/](http://openpowerforum.wpengine.com/membership-2/levels/)) + +Click [More information](https://mp.weixin.qq.com/s?__biz=MjM5MDk3Mjk0MQ==&mid=509982703&idx=1&sn=48ee68fbdd54b1437e78a1d9c2285864&chksm=3d2dba8d0a5a339baf271ed5cedf51d8f29c097e488244623bac2d6f61121c3292fc45de56a4&scene=18&key=1d3ba184c3454c150135581fb2c6d4fd1a55a420799f8) to get to know more of the contest. + +Click [Enroll](http://dsgapp.cn.edst.ibm.com/bps/OpenCAPI/index.html?lectureId=1&project_id=2) for enrollment\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[/vc\_column\]\[/vc\_row\]\[vc\_row css\_animation="" row\_type="row" use\_row\_as\_full\_screen\_section="no" type="full\_width" angled\_section="no" text\_align="left" background\_image\_as\_pattern="without\_pattern" css=".vc\_custom\_1538077266597{margin-top: 20px !important;}" z\_index=""\]\[vc\_column\]\[vc\_column\_text\] + +## Messages from Organizers and Sponsors + +\[/vc\_column\_text\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation="" css=".vc\_custom\_1538077331586{margin-top: 16px !important;}"\]\[vc\_column\_inner width="1/6"\]\[vc\_single\_image image="5636" img\_size="full" qode\_css\_animation=""\]\[vc\_column\_text css=".vc\_custom\_1538076393779{margin-top: 16px !important;}"\]Waiming Wu, General Manager, IBM OpenPOWER China\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="5/6"\]\[vc\_column\_text\]With the ever increasing demand for computing power today, OpenPOWER based on IBM POWER processor and Linux technology has attracted more and more attention from customers, developers and business partners. OpenPOWER systems, with its excellent computing and processing capabilities are ideal for AI, big data and cloud platforms. The OpenCAPI technology used in OpenPOWER systems support heterogeneous computing such that innovation in accelerators could be quickly integrated with POWER processor to provide the next level of computing performance. The new concept of heterogeneous computing based on collaboration between CPU and accelerators heralds a new computing era. + +We are pleased to see the announcement and roll out of “The OpenCAPI + OpenPOWER Heterogeneous Computing Contest” for universities and research institutions. The OpenPOWER Foundation & OpenCAPI Consortium, Fudan University and many members of OpenPOWER actively support this activity. 
This is the best demonstration of the support from academic and corporate community in technological innovation. In IBM we will also do our best to co-organize this event and to contribute developing talents and innovative solutions. + +We are also grateful to the technical experts at IBM China System Lab. Under this Contest, they will share the leading technology to the competing teams through in-depth technology seminars, carefully prepared technical documents and the upcoming one-to-one expert support, and of course great technical mentorship.\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_separator type="normal" thickness="1" up="16" down="16"\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/6"\]\[vc\_single\_image image="5637" img\_size="full" qode\_css\_animation=""\]\[vc\_column\_text css=".vc\_custom\_1538076518297{margin-top: 16px !important;}"\]Hugh Blemings, Executive Director, OpenPOWER Foundation\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="5/6"\]\[vc\_column\_text\]At the OpenPOWER Foundation we’re delighted to see our members like Mellanox, Nallatech, Semptian, Xilinx and of course IBM working together in the “OpenCAPI + OpenPOWER Contest”. CAPI/OpenCAPI is a key part of the great Open system that OpenPOWER represents and a leading high speed interconnect for Accelerators and Interconnects alike. + +Our Members, working with some great universities and research institutions in China will provide both an opportunity for people to learn about CAPI/OpenCAPI and to see solutions to real world problems solved faster using innovative OpenPOWER hardware and software. + +We’re looking forward to seeing what innovative ideas the contestants come up with and, of course, congratulating the winners at the OpenPOWER Summit in Beijing in December. We wish all involved the very best!\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_separator type="normal" thickness="1" up="16" down="16"\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/6"\]\[vc\_single\_image image="5639" img\_size="full" qode\_css\_animation=""\]\[vc\_column\_text css=".vc\_custom\_1538076676628{margin-top: 16px !important;}"\]Yujing Jiang, Product and Marketing Director, Inspur Power Commercial Systems Co., Ltd\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="5/6"\]\[vc\_column\_text\]Inspur Power Commercial Systems Co., Ltd. is a platinum member of the OpenPOWER Foundation, committed to co-build an open OpenPOWER ecosystem, developing servers based on open Power technology, improving server ecosystems, building a sustainable server business and providing users with advanced, differentiated and diverse computing platforms and solutions. Inspur Power Systems insist on openness and integration for continuous development of heterogeneous computing architecture based on CAPI. CAPI heterogeneous computing breaks the ‘computing walls’, enhances massive parallel data processing capabilities and provides more effective and powerful data resources for image and video, deep learning and database. CAPI heterogeneous computing also provides extremely high data transmission bandwidth, defines a more flexible data storage method, and greatly improves server IO capabilities. + +Inspur Power Systems will provide OpenPOWER based datacenter server FP5280G2 as the platform for the contest to verify and test the works. 
It is the first P9 platform in China, designed for cloud computing, big data, deep learning optimization. It is optimized in performance, extension and storage. The standalone FP5280G2 provides the leading PCIe Gen4 (16Gbps) channel in the industry, and supports CAPI 2.0. We wish this new system will bring an effective support to the contest. And in the future, we can build more systems to enhance heterogeneous computing with more interconnection through CAPI technology between CPU to memory, network, I/O device etc and be widely applied to industry market.\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_separator type="normal" thickness="1" up="16" down="16"\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/6"\]\[vc\_single\_image image="5640" img\_size="full" qode\_css\_animation=""\]\[vc\_column\_text css=".vc\_custom\_1538076774108{margin-top: 16px !important;}"\]Yibo Fan, Associate Professor, Fudan University Microelectronics College, China\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="5/6"\]\[vc\_column\_text\]CAPI/OpenCAPI is an unique technology in OpenPOWER system, which provides a superior operating environment for FPGA heterogeneous computing design, especially eliminating the driver development process and providing the most convenient method for rapid chip IP prototype verification and the deployment of heterogeneous systems. Based on CAPI technology, our team launched a CAPI running example of open source H.265 video encoder. Through the technical cooperation in the project, we fully realized the innovative value of CAPI technology for heterogeneous computing. Hopefully by hosting this contest, we can contact more excellent teams and talents who study and master CAPI technology in peer universities, and promote CAPI/OpenCAPI technology further to universities and industries.\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_separator type="normal" thickness="1" up="16" down="16"\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/6"\]\[vc\_single\_image image="5641" img\_size="full" qode\_css\_animation=""\]\[vc\_column\_text css=".vc\_custom\_1538076852620{margin-top: 16px !important;}"\]Qingchun Song, Mellanox Technologies, Asia & Pacific Marketing Director\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="5/6"\]\[vc\_column\_text\]As a member of OpenPOWER, Mellanox is pleased to be involved in the optimization of OpenCAPI. As a provider of intelligent end-to-end network products, Mellanox has always worked closely with x86 and POWER processor platforms. Mellanox intelligent network products has always been the best choice for the POWER platform. + +In June 2018, Summit Supercomputer from Oak Ridge National Laboratory, US, was released at the International Supercomputing Conference in Frankfurt, Germany, It use the POWER CPU plus Mellanox’s InfiniBand network and it is now the fastest supercomputer and artificial intelligence computer in the world. + +Mellanox network products currently support 100 GHz per second. Products with a speed of 200 GHz per end will be released into market in the next quarter. Better network speed requires the support of faster internal buses, and high speed OpenCAPI and 200G network products are an excellent match. 
+ +I hope that this OpenCAPI optimization contest can efficiently improve the performance of CAPI, and can realize the integration with RDMA technology, which truly realize the matching of internal network buses and external buses and help the next-generation data center. Finally, I wish the contest going smoothly, thank you.\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_separator type="normal" thickness="1" up="16" down="16"\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/6"\]\[vc\_single\_image image="5642" img\_size="full" qode\_css\_animation=""\]\[vc\_column\_text css=".vc\_custom\_1538076930931{margin-top: 16px !important;}"\]Hao Li, General Manager, Semptian Data Co., Ltd\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="5/6"\]\[vc\_column\_text\]Many thanks to the organizers of the event for inviting Semptian to participate in 2018 OpenCAPI Heterogeneous Computing Design Contest. In recent years, the way of relying solely on CPU to improve computing performance has come to an end. At the same time, various applications, which are emerging rapidly, raise higher demand on computing ability and constantly challenge the performance limit. It has become a consensus in industry that through heterogeneous computing we can break the bottleneck of computing and data transmission. + +As a senior corporation which has more than ten years of experience in the field of FPGA developing, Semptian believes that with the help of FPGAs’ advantage of high performance, low power consumption, flexibility and ease of use, combined with the special technical advantage of CAPI technology in OpenPOWER systems, processing specific computing through FPGA + CAPI + CPU is the best way to optimize computing performance, reduce acquisition and operation costs and meet the requirements of applications and power consumptions. + +We are very glad to participate in this contest together with other members of the OpenPOWER alliance to expand the development of the alliance’s ecosystem. Hopefully through this contest, we can explore more application scenarios, like artificial intelligence inference, image and video acceleration and gene computing acceleration, to expand the application of heterogeneous computing.\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_separator type="normal" thickness="1" up="16" down="16"\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner width="1/6"\]\[vc\_single\_image image="5643" img\_size="full" qode\_css\_animation=""\]\[vc\_column\_text css=".vc\_custom\_1538077009696{margin-top: 16px !important;}"\]Fan Kui, Account Sales Manager Nallatech\[/vc\_column\_text\]\[/vc\_column\_inner\]\[vc\_column\_inner width="5/6"\]\[vc\_column\_text\]Nallatech and IBM have worked closely through the OpenPOWER Foundation to enable heterogeneous computing by way of CAPI1.0 & CAPI2.0 based FPGA Accelerators. Nallatech’s 250S FPGA Accelerator supports CAPI1.0 and the 250S+ supports CAPI2.0. Additionally the OpenPOWER Accelerator Workgroups ”CAPI SNAP Acceleration Framework”, is also supported on these cards. CAPI SNAP eases the development of Accelerator Function Units, AFUs, within the FPGA in OpenPOWER systems. As you may very well know, FPGA computing is one of the leading technologies in the development of AI and Deep learning and is one of the most exciting advancements that will affect in how we live our lives. 
+ +We are all proud to sponsor such an aspirational and academic event with students of China that will boast amazing innovations in FPGA technology for future generations to come. Thank you for the opportunity of sponsoring your event. We wish you great fortune in this contest, as well as your career in FPGA Acceleration.\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[/vc\_column\]\[/vc\_row\]\[vc\_row css\_animation="" row\_type="row" use\_row\_as\_full\_screen\_section="no" type="full\_width" angled\_section="no" text\_align="left" background\_image\_as\_pattern="without\_pattern"\]\[vc\_column\]\[vc\_empty\_space\]\[/vc\_column\]\[/vc\_row\] diff --git a/content/blog/2019-opencapi-contest-finalists-announced.md b/content/blog/2019-opencapi-contest-finalists-announced.md new file mode 100644 index 0000000..c58cd06 --- /dev/null +++ b/content/blog/2019-opencapi-contest-finalists-announced.md @@ -0,0 +1,77 @@ +--- +title: "2019 OpenCAPI Contest Finalists Announced" +date: "2020-02-18" +categories: + - "capi-series" + - "blogs" +--- + +From the OpenPOWER Foundation team in China, the ten finalists for this year's OpenCAPI contest have been announced.  The full text of the announcement is shown below, or an [English translation is available if you prefer](https://openpowerfoundation.org/wp-content/uploads/2020/02/2019-OpenCAPI-Contest-Semifinal-list-annoucement-EN.pdf). + +Congratulations to all the finalists from everyone at OpenPOWER! + +**标题:叮咚!****2019 OpenCAPI****异构计算大赛复赛名单到啦,合作方点评高亮快闪!** + +**摘要:科技力量,初露锋芒** + +打破藩篱,引领加速。 + +2019 OpenCAPI异构计算设计大赛初赛告一段落。 + +自今年 9 月 24 日以来 + +共有来自 14 所高校和研究所的 + +21 支队伍报名参加初赛。 + +在初赛激烈的头脑比拼和技术较量后, + +经过专家严格评审, + +现有 10 支代表队冲出重围杀入决赛。 + +科技创新的力量正在初露锋芒! + +他们分别是谁呢? + +复赛入围名单现在揭晓—— + +(按队名排序) + +
| 团队 | 学校 | 方案名称 |
| --- | --- | --- |
| Baymax | 复旦大学 | 基于OpenCAPI的视频去雾与动态目标识别系统 |
| 蔡小帮 | 深圳大学 | 面向嵌入式存储LDPC算法的硬件加速 |
| 黄菜黄 | 复旦大学 | 视频风格实时迁移的异构系统 |
| 华科二队 | 华中科技大学 | 基于图像结构相似度的烟雾浓度视频测量 |
| HKUST | 香港科技大学,中山大学 | 基于OpenCAPI平台的Mean Shift Tracking算法实现 |
| KCCT | 华中科技大学 | 基于OpenCAPI接口的MetaPruning神经网络剪枝算法加速 |
| SDUers | 山东大学 | 基于OpenCAPI的全景拼接加速设计 |
| shadow is the light | 西安交通大学 | 基于OpenCAPI的高性能混合加密系统 |
| 我们有个响亮的名字 | 西安交通大学 | 基于OpenCAPI的变压器绕组变形在线监测系统 |
| 迎风踏浪 | 西安交通大学 | 基于RLWE的同态密码加速器设计与实现 |
+ +本次大赛的合作方,包括 Alpha Data Parallel Systems Limited、联捷计算科技(深圳)有限公司(CTAccel)、北京迈络思科技有限公司(Mellanox)、赛灵思电子科技(上海)有限公司(Xilinx)。再次衷心感谢各方合作伙伴的信任和鼎力支持,为我们本次比赛的顺利进行提供了强大的助力!我们精心选取来自合作方的精彩点评,大家来一睹为快吧! + +合作方精彩点评快闪 + +**David Miler, Managing Director, Alpha Data Parallel Systems Limited** + +On behalf of everyone at Alpha Data, I would like to thank you for your participation and winning work in the OpenCAPI contest. As the benefits of the FPGAs and coherent acceleration gain widespread adoption, the knowledge you have gained from this contest places you at the forefront of the next generation of High-Performance Computing. I hope you had a good experience with the Alpha Data hardware and if you have any comments for improving future products, please let us know. I wish you every success in your continued use of this class-leading technology. + +**俞海乐,****CEO****,联捷计算科技(深圳)有限公司(****CTAccel****)** + +非常开心看到这些入围的作品。选题不错,或者具有落地价值,或者比较新颖热门;方案分析、关键性技术和测试方案都挺完整。 + +其中,像视频风格实时迁移、视频去雾和动目标检测、全景拼接等这几个多媒体应用通过异构计算降低延时,能有效解决场景落地问题;像混合加密、同态加密和LDPC算法,聚焦在大数据安全,值得关注和研究;另外还有工控领域变压器绕组在线监测等,比较新颖值得探索。最后,期待各参赛队伍,能顺利完成目标。 + +**宋庆春,亚太区市场部总监,北京迈络思科技有限公司(****Mellanox****)** + +以数据为计算的核心已经成为了应用的趋势,数据在哪里,计算就应该在那里。CPU计算、网络计算和存储计算的三位一体,成为构建高性能数据中心和计算中心的标志。Mellanox作为网络计算的先锋,已经能实现数据在以8Tb/s的速度下从网络中流过的时候完成通信计算,除了为数据在网络中的传输提供了足够的带宽,并解决了数据中心扩展性的难题。 + +但是在服务器内部,PCIe总线的速度已经成为了网络的瓶颈,目前的PCIe3.0 x16只能支持100Gb/s的速度,已经成为200Gb/s网络的瓶颈,OpenPOWER和OpenCAPI的组合可以有效的解决PCIe的带宽瓶颈问题,实现从服务器到网络的数据流量的平衡。2019 OpenPOWER+OpenCAPI竞赛为广大追求极致应用性能的同学们提供了一个很好的平台,在更新的技术台阶上优化应用的性能。 + +**梁晓明****,** **数据中心资深产品行销经理****,** **赛灵思电子科技(上海)有限公司(****Xilinx****)** + +异构计算和硬件加速是近年的热点研究方向,IBM和Xilinx在该领域的深刻研究以及相关产品,给新的计算模式赋能。参赛的各队伍都体现了对新技术和新方向积极探索的精神和很强专业素质。复赛入围作品中,有充分利用OpenCAPI的特性提升性能和易用性的作品,有利用多种异构计算模式灵活的进行算力资源配置的作品,有利用FPGA定制硬件单元突破性能极限的创新。入围作品都结合了实际的应用场景,将前沿技术转换为现实的计算能力,体现了科技转化为生产力的积极推动力。在校学生和科研人员,在研究中发现很多新的课题,用前沿技术和科学的方法提出解决方案,在产业界的支持下快速转换为生产力。创新是Xilinx成长的基础,我们鼓励创新,鼓励优秀的学生为产业界提供新的方法,鼓励根据中国技术标准和实际应用模式进行创新实践。祝愿参赛选手乘风破浪,砥砺前行。 + +再次祝贺入围复赛的参赛团队, + +前路虽漫漫,未来已可期。 + +让我们继续努力, + +在复赛过程中不断创新, + +收获更多的突破和进步! + +成长为引领未来科技创新的中坚力量! diff --git a/content/blog/2019-openpower-opencapi-heterogeneous-computing-design-contest.md b/content/blog/2019-openpower-opencapi-heterogeneous-computing-design-contest.md new file mode 100644 index 0000000..64b7904 --- /dev/null +++ b/content/blog/2019-openpower-opencapi-heterogeneous-computing-design-contest.md @@ -0,0 +1,188 @@ +--- +title: "2019 OpenPOWER + OpenCAPI Heterogeneous Computing Design Contest" +date: "2019-09-24" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "opencapi" + - "opencapi-contest" +--- + +After the success of the 2018 OpenPOWER/CAPI and OpenCAPI Heterogeneous Computing Design Contest, we're excited to see its return in 2019! Groups from research institutions or universities in China are welcome to apply. You can find more information on the contest from our OpenPOWER ecosystem friends in China below. Good luck to all of the participants! + +![](images/KV-English-1024x556.jpg) + +# 2019 OpenPOWER + OpenCAPI异构计算大赛 + +人工智能、物联网、深度学习、人脸识别、无人驾驶…… + +耳熟能详的词汇背后,隐藏着怎样的技术? + +丰富的应用、便捷的生活 + +身处全民数字化时代的你,是否想过 + +是什么在支持着我们? + +在这一切的背后都离不开大量提供强劲计算能力的服务器以及被日益关注的异构计算。 + +在OpenPOWER服务器系统上实现异构计算,利用CAPI接口连接FPGA,设计硬件加速器,可以显著提升系统性能, + +打破计算和数据传输的瓶颈,降低机器的购置和运维成本,实现异构计算的各种可能。 + +回顾2018 OpenPOWER/CAPI + OpenCAPI异构计算大赛, + +来自17所高校的27支代表队伍报名参加比赛! + +经过3个月的实际开发、调试、测试和调优, + +成功开发出基于CAPI/OpenCAPI的设计原型,实践异构计算。 + +他们出色的学习及开发能力让我们相信他们可以逐渐成长为科技创新的中坚力量! + +而今年, + +打破藩篱,引领加速, + +你准备好了吗? 
+ +## 大赛介绍 + +2019 OpenPOWER + OpenCAPI异构计算大赛由OpenPOWER基金会、OpenCAPI联盟主办,IBM中国承办,浪潮商用机器有限公司协办,多家OpenPOWER基金会成员支持,旨在鼓励大学和科研机构了解和实践异构计算,利用OpenPOWER系统上FPGA异构计算的先进技术,开拓视野、积极创新、加速推动科技创新实际应用。 + +  + +参赛者将有机会与OpenPOWER基金会多家会员合作,在先进的OpenPOWER系统平台上实践开发,感受专业领域的开发环境和方法学,并获得企业导师一对一技术指导。获奖学生除了获取奖金之外,还有机会成为IBM的实习生以及工作优先录取的机会! + +  + +另外OpenPOWER基金会也欢迎高校加入成为学术/协会成员(无入会费用,详见:[https://openpowerfoundation.org/membership/levels/](https://openpowerfoundation.org/membership/levels/))。 + +  + +**长按扫码报名及提交您的初赛方案** + +(报名及方案提交开放时间:2019.9.24-2019.10.25) + +  + +## 大赛主体单位 + +【主办单位】 + +OpenPOWER基金会 + +OpenCAPI联盟 + +【承办单位】 + +IBM中国 + +【协办单位】 + +浪潮商用机器有限公司 + +【合作单位】 + +Alpha Data + +联捷科技(CT-Accel) + +北京迈络思科技有限公司(Mellanox) + +赛灵思电子科技(上海)有限公司(Xilinx) + +## 竞赛背景 + +异构计算(Heterogeneous Computing)是指使用一种以上处理器的系统。这种多核心的系统不仅通过增加处理器内核提升性能,还纳入专门的处理能力,例如GPU或FPGA来应对特定的任务。 + +近年来,随着硅芯片逼近物理和经济成本上的极限,摩尔定律已趋近失效。但与之相对的却是,互联网的蓬勃发展、信息量爆炸式增长以及AI技术研究和应用普及,都对计算能力的要求变的更高。而异构计算,将关注点不仅局限在CPU性能的提升,而是打破CPU和外围设备间数据传输的瓶颈,让更多的硬件设备参与计算,如用专用硬件完成密集计算或者外设管理等,从而显著提高系统性能。毫无疑问,异构计算是提高计算力的主流方向。 + +参加OpenCAPI异构计算设计大赛,不仅可以了解当今处理器和系统硬件上最领先的技术,更可以成为把您的聪明才智孵化成某项突破性研究或应用的起点。 + +## 竞赛对象 + +参赛对象为国内任何对大赛有兴趣的大学或研究机构。大赛以学校为单位组织报名,比赛形式为团体赛。具体要求如下: + +- 每支队伍由一名以上学生及一位指导老师组成。指导老师是参赛队所属高校的正式教师,一位老师可以指导多支参赛队 +- 允许一个学校有多只代表队 +- 报名时应具备在校学籍 +- 参赛队员应保证报名信息准确有效 + +## 竞赛奖励 + +初赛入围的10支参赛队将进入复赛。复赛设立一、二、三等奖及鼓励奖。奖金如下(税前金额): + +一等奖   1支团队  奖金人民币2.5万元 + +二等奖   1支团队  奖金人民币2万元 + +三等奖   1支团队  奖金人民币 1.5万元 + +鼓励奖   进入复赛的其他7支队伍 奖金人民币5千元 + +## 赛程和赛制 + +本次竞赛分初赛和复赛两个阶段。初赛采用网上评审方式,复赛采用公开项目答辩的评审方式。 赛程安排如下: + +  + +
| 赛程 | 时间 | 内容 |
| --- | --- | --- |
| 初赛 | 9/24-10/25 | 初赛方案设计及提交 |
| | 10/26-11/06 | 初赛专家评审 |
| | 11/07 | 公布复赛入围的10支团队的名单 |
| 复赛 | 11/08-03/06/2020 | 复赛作品开发及提交 |
| | 03/07/2020-03/14/2020 | 复赛专家评审 |
| | 03/18/2020 | 复赛答辩及颁奖典礼 |
+ +  + +  + +**初赛:**参赛队选择可被加速的应用场景,构思系统设计。提出具有创新想法的设计方案。 + +以下几类供参考,并无限制: + +- 解决计算能力瓶颈:大规模并行数据处理能力可以应用于神经网络,图像视频,密码学,网络安全,数据库、以及广泛领域中的数据计算(金融,地质,生物、材料、物理等)。 +- 解决数据传输瓶颈:超高的数据传输带宽可以应用于网络传输,定义更灵活的数据存储方式,并且利用FPGA在数据传输过程中顺便进行数据处理,极大地减轻服务器端的CPU压力。 + +IBM资深专家指导参赛团队结合研究领域,选择应用场景。各团队构思系统设计,进行可行性分析,划分算法流程,软硬件分配,估算带宽,计算密度和效率。在这一阶段,只需以书面报告形式提交方案构想,即提交架构设计和性能预测分析报告。 + + + +**复赛:**参赛队和IBM资深专家一起审阅系统设计,并进入具体开发阶段。 + +  + +- 开发环境为主办单位和合作单位提供,包括OpenPOWER服务器和支持CAPI接口的FPGA板卡搭建的远程环境。主要工作包括软件/硬件开发、调试、记录和分析测试结果。 +- 具体开发过程中,企业导师一对一辅导,协助参赛者把设计实现成原型。复赛作品要求以论文形式提交原型开发报告和分析测试结果。 + +  + +详细的提交内容以及方式,将在后续的竞赛过程中发布,以大赛主办方发布的最新内容为准。 + +## 更多详情 + +**CAPI和OpenCAPI** + +CAPI的全称是Coherent Acceleration Processor Interface,它是允许外部设备(I/O Device)和处理器CPU共享内存的接口技术。以FPGA为例,作为现场可编程门阵列硬件,它有令人惊叹的并行处理能力并完全可以自由定制,但它连在系统中时,仍然是个外部设备。它要参与到异构计算中,和CPU协同工作,不能共享内存怎么行呢?从技术上看,用CAPI接口连接FPGA作为异构计算平台有以下好处: + +- 它是带一致性的加速接口,FPGA可以直接像CPU一样直接访问内存。避免软硬件协同设计中的地址转换操作,大大简化编程思路,进而降低研发开销,缩短开发周期。 +- 主机端程序完全工作在用户态,无须编写PCIE设备驱动程序。 +- FPGA作为I/O设备,和主机通讯的延时更短。 +- 在FPGA处理能力增加的场景下,带宽瓶颈日益凸显。它是业内最领先的PCIE Gen4 (16Gbps) 和OpenCAPI (25Gbps) 通道,妥妥的大带宽! +- OpenCAPI还支持I/O通道的内存扩展,由此探索存储级内存(SCM)对大数据应用的加速。 + +OpenCAPI是独立的标准化组织([www.opencapi.org](http://www.opencapi.org)),它将新一代CAPI技术规范开放出来,致力于推动高速硬件接口设计全面进入带内存一致性的时代,顺应异构计算的潮流,并为之提供了坚实的技术支撑。OpenCAPI首先在Power9发布,搭载Power9和OpenPOWER9服务器,但它的设计特性并没有绑定在Power架构上,完全可以嵌入其它种类的处理器架构。 + +  + +**Power Systems和OpenPOWER** + +  + +在全球众多最大型的集群中,都能看到 Power Systems 高性能计算服务器的身影。Power Enterprise 服务器专为数据设计,可为企业实现终极的弹性、可用性、安全性等性能,被广泛应用于银行、政府、航空、能源等企业的核心业务中,为要求苛刻的工作负载(例如,基因、金融、计算化学、石油和天然气勘探以及高性能数据分析)提供极致。 + +  + +2013年IBM开放Power服务器架构,成立OpenPOWER基金会(https://openpowerfoundation.org/),目前已经有来自34个国家和地区的340多家公司加入,核心会员有IBM、Google、Nvidia、Redhat、Canonical(Ubuntu)、Hitachi、浪潮、Wistron等,共同建设开放的OpenPOWER生态。对比传统Power系统,基于 Linux 的OpenPOWER系统主要由联盟成员设计生产,价格优势明显,同时也能够实现出色的性能和投资回报率,适用于计算密集型和数据密集型应用。这些服务器提供您所需的灵活性,能够快速集成创新技术解决方案,避免被供应商的专有技术所“套牢”,并加速实现业务结果。 + +  + +2018年初,IBM 宣布推出POWER9处理器。全新POWER9芯片为计算密集型人工智能工作负载而设计,是首批嵌入PCI-Express 4.0、新一代NVIDIA NVLink及OpenCAPI的处理器,基于该处理器的系统可以大幅提升Chainer、TensorFlow及Caffe等各大人工智能框架的性能,并加速Kinetica等数据库。提供超越过往所有设计的高速信号总线带宽。如此一来,数据科学家能够实现以更快的速度构建包括科研范畴的深度学习洞察、实时欺诈检测和信用风险分析等范围的应用。POWER9是美国能源部Summit及Sierra超级计算机的核心,这两台超级计算机是当今世界上性能最强的数据密集型超级计算机。 diff --git a/content/blog/_index.md b/content/blog/_index.md new file mode 100644 index 0000000..dba5d1e --- /dev/null +++ b/content/blog/_index.md @@ -0,0 +1,9 @@ +--- +title: Blogs +outputs: + - html + - rss + - json +date: 2022-01-31 +draft: false +--- diff --git a/content/blog/a-better-way-to-compress-big-data.md b/content/blog/a-better-way-to-compress-big-data.md new file mode 100644 index 0000000..048bdd1 --- /dev/null +++ b/content/blog/a-better-way-to-compress-big-data.md @@ -0,0 +1,41 @@ +--- +title: "A Better Way to Compress Big Data" +date: "2018-03-08" +categories: + - "blogs" +tags: + - "openpower" + - "center-for-genome-research-and-biocomputing" + - "oregon-state-university" + - "ibm-power-systems" +--- + +## **Wasting CPU hours on compression** + +The Center for Genome Research and Biocomputing (CGRB) has a large computing resource that supports researchers at Oregon State University by providing processing power, file storage service and more. This computational resource is also used to capture all data generated from the CGRB Core Laboratory facility that processes biological samples used in High Throughput Sequencing (HTS) and other data rich tools. 
+ +Currently, the CGRB Core Lab generates between 4TB and 8TB of data per day which directly lands on the biocomputing resource and is made immediately available to researchers.  Because of this, the CGRB has over 4PB of usable space within our biocomputing facility and continues to add space monthly. Since individual labs must purchase file space needed to accomplish their research, there is always pressure from the lab managers to have users clean up and reduce space allowing for new experiments to be done without the need to purchase more space. This process leads to many users taking CPU time to compress data needed for later use but limiting the lab’s current available space. Since we like to use processing machines for processing data and not just compressing, we needed to find a solution allowing GZIP work to be done without tying up our CPU hours. + +## **More computing, faster** + +To reduce loads on the processing machines and computational time devoted to compressing data, we started considering FPGA cards. + +Specifically, we evaluated offloading compression processes directly onto a peripheral FPGA card. Offloading compression would increase our output and help manage file space usage so groups do not have to purchase more space to start new experiments. + +The new IBM Power Systems POWER8 machines include an interface used to increase speed from CPUs to FPGAs in expansion slots. The Coherent Accelerator Processor Interface (CAPI) connects the expansion bus and allows users to access resources external to the main CPU and memory with up to 238 GB/sec bus speed, thus overcoming a key limitation when working with large data sets. + +Our users do take advantage of the capabilities of the FPGA card, they not only complete their tasks more quickly, but also free up additional CPU hours for other researchers on the cluster. The solution has provided a net benefit in resource utilization and thus has allowed _all_ users to do more computing, faster. + +## **The GZIP coprocessor success story** + +Initial tests showed compressing a small job with a 22-gigabyte file using the CPU would take over 9 minutes of time versus running on the FPGA card the same file would finish in 19 seconds. These tests were changed to massively increase the data being compressed and found that a job that would take 67 hours on the CPU would only take 50 minutes on the FPGA. + +The FPGA GZIP coprocessor has allowed our researchers and staff to quickly recover valuable file space, while speeding up analytics and processing. The coprocessor has its own queue allowing users to submit jobs that can access the gzip card rather than wait to use it interactively. As the coprocessor can only be utilized by a single process at any given time, using the queuing system allows for a mechanism where multiple users can submit jobs to use the card without over-loaded card since the queue waits for one job to finished before beginning the next. + +We have seen as much as a 100-fold increase in the rate at which we can compress and decompress data to and from our storage cluster. These data largely consist of text-based strings (e.g., A, C, T and G nucleotides), meaning they are highly compressible. + +The compression ratio achieved with the gzip card is inferior to that obtained by running gzip directly through the main processor. Our observations indicate that the gzip card yields approximately 80% of the compression obtained using standard methods. 
This was within an acceptable range for our users since the speed of both compression and decompression is so much greater than that achieved by the standard methods. + +

| 15 GB .fastq sequence file | Compressed size | Runtime | Compression ratio | Compression rate (GB/s) |
| --- | --- | --- | --- | --- |
| CPU gzip | 3.1 GB | 28m 53s | 5.16 | 0.006 |
| CPU gzip -9 | 2.9 GB | 133m 36s | 5.17 | 0.001 |
| Power/CAPI Genwqe_gzip | 4.2 GB | 71 seconds | 3.57 | 0.152 |
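For readers who want to check the arithmetic behind the speedup and ratio figures above, here is a minimal, illustrative Python sketch; it is not part of our pipeline, and the sizes and runtimes are placeholder values drawn from the examples in this post.

```python
def compression_ratio(uncompressed_gb: float, compressed_gb: float) -> float:
    """How many times smaller the compressed output is than the input."""
    return uncompressed_gb / compressed_gb


def speedup(cpu_seconds: float, fpga_seconds: float) -> float:
    """How many times faster the FPGA run finished than the CPU run."""
    return cpu_seconds / fpga_seconds


# Illustrative values from this post: a 22 GB file took roughly 9 minutes on
# the CPU versus 19 seconds on the CAPI-attached FPGA card.
print(f"speedup ~ {speedup(9 * 60, 19):.0f}x")

# Illustrative values from the table above: 15 GB of .fastq input
# compressed down to 4.2 GB by the FPGA card.
print(f"ratio   ~ {compression_ratio(15.0, 4.2):.2f}x")
```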
+ +**Table-1:** Compression ratio comparison between CPU and FPGA of a 15GB fastq DNA sequence file. diff --git a/content/blog/a-deep-dive-into-a2i-and-a2o.md b/content/blog/a-deep-dive-into-a2i-and-a2o.md new file mode 100644 index 0000000..e97f7eb --- /dev/null +++ b/content/blog/a-deep-dive-into-a2i-and-a2o.md @@ -0,0 +1,58 @@ +--- +title: "A Deep Dive into A2I and A2O" +date: "2020-12-21" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "power" + - "openpower-foundation" + - "open-source" + - "a2i" + - "a2o" + - "open-hardware" + - "developer-community" + - "isa" + - "power-processor-core" +--- + +**By [Abhishek Jadhav,](https://www.linkedin.com/in/abhishek-jadhav-60b30060/) Lead Open Hardware Developer Community (India) and Freelance Tech Journalist** + +After the opening of the [POWER instruction set architecture (ISA)](https://newsroom.ibm.com/2019-08-21-IBM-Demonstrates-Commitment-to-Open-Hardware-Movement) last August, there have been many developments from IBM and its community. + +Some major contributions include OpenPOWER’s A2I and A2O POWER processor core. + +The OpenPOWER Foundation, which is under the umbrella of the Linux Foundation, works on the advocacy of POWER Instruction Set Architecture and its usage in the industry. + +## **What is A2I the core?** + +[A2I core](https://github.com/openpower-cores/a2i/blob/master/rel/doc/A2_BGQ.pdf) was created as a high-frequency four-threaded design, optimized for throughput and targeted for 3 GHz in 45nm technology. It was created to provide high streaming throughput, balancing performance and power. + +_![](images/IB1-1024x680.png)_ + +_“With a strong foundation of the open POWER ISA and now the A2I core, the open source hardware movement is poised to accelerate faster than ever,” said James Kulina, Executive Director, OpenPOWER Foundation._ + +A2I was developed as a processor for customization and embedded use in system-on-chip (SoC) devices, however, it's not limited to that— it can be seen in supercomputers with appropriate accelerators. There is a diverse range of applications associated with the core including streaming, network processing, data analysis. + +We have an [Open Hardware Developer Community](https://www.linkedin.com/groups/12431698/) and contributors across India working on A2I in multiple use cases. where there has been an increasing contribution from the open source community. + +If you want a headstart on A2I core, check out this short [tutorial](https://github.com/openpower-cores/a2i/blob/master/rel/doc/a2_build_video.md) on how to get started. + +## **The launch of A2O** + +A couple of months after the A2I core’s release at [OpenPOWER Summit 2020](https://events.linuxfoundation.org/openpower-summit-north-america/), the OpenPOWER Foundation announced the A2O POWER processor core, an out-of-order follow-up to the A2I core. The A2O processor core is now open-source as a POWER ISA core for embedded use in SoC designs. The A2O offers better single-threaded performance, supports PowerISA 2.07, and has a modular design. + +![](images/IMB2-1024x575.png) + +Potential A2O POWER processor core applications include artificial intelligence, autonomous driving, and secure computing. + +If you want to get started with A2O POWER processor core, watch this short [tutorial](https://github.com/openpower-cores/a2o/blob/master/rel/doc/a2_build_video.md). + +The A2O reference manual is available [here](https://github.com/openpower-cores/a2o/blob/master/rel/doc/A2O_UM.pdf). 
+ +  + +Join the [Open Hardware Developer Community](https://www.linkedin.com/groups/12431698/) to engage in exciting projects on A2I and A2O processor core. + +_Source: All the images were taken from the_ [_Github Repo_](https://github.com/openpower-cores/a2i/tree/master/rel/doc) _and_ [_OpenPOWER Summit North America 2020_](https://openpowerna2020.sched.com/event/eOyb/ibm-open-sources-the-a2o-core-bill-flynn-ibm)_._ diff --git a/content/blog/a-powerful-birthday-gift-to-moores-law.md b/content/blog/a-powerful-birthday-gift-to-moores-law.md new file mode 100644 index 0000000..e5d32f7 --- /dev/null +++ b/content/blog/a-powerful-birthday-gift-to-moores-law.md @@ -0,0 +1,38 @@ +--- +title: "A POWERFUL Birthday Gift to Moore's Law" +date: "2015-04-12" +categories: + - "blogs" +tags: + - "featured" +--- + +By Bradley McCredie + +President, OpenPOWER Foundation + +As we prepare to join the computing world in celebrating the 50th anniversary of Moore’s Law, we can’t help but notice how the aging process has slowed it down. In fact, in a [recent interview](http://spectrum.ieee.org/computing/hardware/gordon-moore-the-man-whose-name-means-progress) with IEEE Spectrum, Moore said, “I guess I see Moore’s Law dying here in the next decade or so.”  But we have not come to bury Moore’s Law.  Quite the contrary, we need the economic advancements that are derived from the scaling Moore’s law describes to survive -- and they will -- if it adapts yet again to changing times. + +It is clear, as the next generation of warehouse scale computing comes of age, sole reliance on the “tick tock” approach to microprocessor development is no longer viable.  As I told the participants at our first OpenPOWER Foundation summit last month in San Jose, the era of relying solely on the generation to generation improvements of the general-purpose processor is over.  The advancement of the general purpose processor is being outpaced by the disruptive and surging demands being placed on today’s infrastructure.  At the same time, the need for the cost/performance advancement and computational growth rates that Moore’s law used to deliver has never been greater.   OpenPOWER is a way to bridge that gap and keep Moore’s Law alive through customized processors, systems, accelerators, and software solutions.  At our San Jose summit, some of our more than 100 Foundation members, spanning 22 countries and six continents, unveiled the first of what we know will be a growing number of OpenPOWER solutions, developed collaboratively, and built upon the non-proprietary IBM POWER architecture. These solutions include: + +Prototype of IBM’s first OpenPOWER high performance computing server on the path to exascale + +- First commercially available OpenPOWER server, the TYAN TN71-BP012 +- First GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950 +- Rackspace open server specification and motherboard mock-up combining OpenPOWER, Open Compute and OpenStack + +Together, we are reimagining the data center, and our open innovation business model is leading historic transformation in our industry. + +The OpenPOWER business model is built upon a foundation of a large ecosystem that drives innovations and shares the profits from those innovations. We are at a point in time where business model innovation is just as important to our industry as technology innovation. + +You don’t have to look any further than OpenPOWER Chairman, Gordon MacKean’s company, Google to see an example of what I mean. 
While the technology that Google creates and uses is leading in our industry, Google would not even be a shadow of the company it is today without its extremely innovative business model. Google gives away all of its advanced technology for free and monetizes it through other means. + +In fact, if you think about it, almost all of the fastest growing “new companies” in our industry are built on innovative technology ideas, but the most successful ones are all leveraging business model innovations as well. + +The early successes of the OpenPOWER approach confirm what we all know – to expedite innovation, we must move beyond a processor and technology-only design ecosystem to an ecosystem that takes into account system bottlenecks, system software, and most importantly, the benefits of an open, collaborative ecosystem. + +This is about how organizations, companies and even countries can address disruptions and technology shifts to create a fundamentally new competitive approach. + +No one company alone can spark the magnitude or diversity of the type of innovation we are going to need for the growing number of hyper-scale data centers. In short, we must collaborate not only to survive…we must collaborate to innovate, differentiate and thrive. + +The OpenPOWER Foundation, our global team of rivals, is modeling what we IBMers like to call “co-opetition” – competing when it is in the best interest of our companies and cooperating with each other when it helps us all.  This combination of breakthrough technologies and unprecedented collaboration is putting us in the forefront of the next great wave of computing innovation.  Which takes us back to Moore’s Law.  In 1965, when Gordon Moore gave us a challenge and a roadmap to the future, there were no smartphones or laptops, and wide-scale enterprise computing was still a dream.  None of those technology breakthroughs would have been possible without the vision of one man who shared it with the world.  OpenPOWER is a bridge we share to a new era. Who knows what breakthroughs it will spawn in our increasingly technology-driven and connected world.  As Moore’s Law has shown us, the future is wide open. diff --git a/content/blog/a2i-power-processor-core-contributed-to-openpower-community-to-advance-open-hardware-collaboration.md b/content/blog/a2i-power-processor-core-contributed-to-openpower-community-to-advance-open-hardware-collaboration.md new file mode 100644 index 0000000..7a47229 --- /dev/null +++ b/content/blog/a2i-power-processor-core-contributed-to-openpower-community-to-advance-open-hardware-collaboration.md @@ -0,0 +1,31 @@ +--- +title: "A2I POWER Processor Core Contributed to OpenPOWER Community to Advance Open Hardware Collaboration" +date: "2020-06-30" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "openpower-foundation" + - "linux-foundation" + - "power-isa" + - "open-source" + - "ibm-a2i" + - "a2i-power-processor" + - "open-source-hardware" + - "open-source-summit" +--- + +At The Linux Foundation Open Source Summit today, the OpenPOWER Foundation announced a major contribution to the open source ecosystem: the IBM A2I POWER processor core design and associated FPGA environment. Following the [opening of the POWER Instruction Set Architecture (ISA)](https://newsroom.ibm.com/2019-08-21-IBM-Demonstrates-Commitment-to-Open-Hardware-Movement) last August, today’s announcement further enables the OpenPOWER Foundation to cultivate an ecosystem of open hardware development. 
+ +![A2I POWER Processor Core](images/A2I-POWER-Processor-Core-1024x583.png) + +The A2I core is an in-order multi-threaded 64-bit POWER ISA core that was developed as a processor for customization and embedded use in system-on-chip (SoC) devices. It was designed to provide high streaming throughput while balancing performance and power. Originally the “wire-speed processor” of the Edge-of-Network SoC called PowerEN, it was later selected as the general purpose processor used in IBM’s BlueGene/Q family of systems, which helped to advance scientific discovery over the last decade. Built for modularity, A2I has the ability to add an Auxiliary Execution Unit (AXU) that is tightly-coupled to the core, enabling many possibilities for special-purpose designs for new markets tackling the challenges of modern workloads. + +“A2I has demonstrated its durability over the last decade - it’s a powerful technology with a wide range of capabilities,” said Mendy Furmanek, President, OpenPOWER Foundation and Director, POWER Open Hardware Business Development, IBM. “We’re excited to see what the open source community can do to modernize A2I with today’s open POWER ISA and to adapt the technology to new markets and diverse use cases.” + +“With a strong foundation of the open POWER ISA and now the A2I core, the open source hardware movement is poised to accelerate faster than ever,” said [James Kulina](https://www.linkedin.com/in/james-kulina/), Executive Director, OpenPOWER Foundation. “A2I gives the community a great starting point and further enables developers to take an idea from paper to silicon.” + +The A2I core is available on GitHub and [can be accessed here](https://github.com/openpower-cores/a2i). + +[Register for OpenPOWER Summit North America 2020](https://events.linuxfoundation.org/openpower-summit-north-america/) - a free, virtual experience - to learn more about the A2I core and other developments across the OpenPOWER ecosystem. diff --git a/content/blog/academic-and-industry-experts-share-expertise-during-openpower-and-ai-workshop-at-loyola-institute-of-technology.md b/content/blog/academic-and-industry-experts-share-expertise-during-openpower-and-ai-workshop-at-loyola-institute-of-technology.md new file mode 100644 index 0000000..0e8dc49 --- /dev/null +++ b/content/blog/academic-and-industry-experts-share-expertise-during-openpower-and-ai-workshop-at-loyola-institute-of-technology.md @@ -0,0 +1,27 @@ +--- +title: "Academic and Industry Experts Share Expertise During OpenPOWER and AI Workshop at Loyola Institute of Technology" +date: "2019-03-07" +categories: + - "blogs" +--- + +By [Dr. Sujatha Jamuna Anand](https://www.linkedin.com/in/dr-sujatha-jamuna-anand-4251ba92/), Principal, Loyola Institute of Technology + +![](images/loyola-1-300x150.jpg) + +We recently held the OpenPOWER and AI training workshop in Chennai, India. In addition to faculty and students from [Loyola Institute of Technology](https://litedu.in/), we were joined by academic and industry experts from [IBM](https://www.ibm.com/us-en/?ar=1), [Open Computing Singapore](https://opencomputing.sg/), [Indian Institute of Technology Madras](https://www.iitm.ac.in/), [University of Engineering and Management Kolkata](http://uem.edu.in/uem-kolkata/) and [Object Automation](http://www.object-automation.com/). + +Attendees learned from a number of sessions: + +- [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), IBM shared insight on AI, deep learning inferencing and edge computing. 
As part of his presentation, he shared several use cases which have been deployed in multiple industries around the world. +- [Jayaram Kizhekke Pakkathillam](https://www.linkedin.com/in/jayaram-kizhekke-pakkathillam-6b2b0963/), IIT Madras gave a brief introduction about unmanned aerial vehicles (UAVs) and the projects he’s worked on as part of IIT Madras’ Aerospace Engineering department. He also discussed how UAVs are effectively used for military and agricultural purposes with examples of different AI systems. +- [Wilson Josup](https://www.linkedin.com/in/wilson-josup-cdcp-ccca-a18ab943/), Open Computing Singapore spoke about the difference between CPUs and GPUs, different types and use cases of GPUs and how OpenPOWER architecture innovations contribute to improved performance from applications. +- [Gayathri Venkataramanan](https://www.linkedin.com/in/gayathri-venkataramanan-0a8831166/), Object Automation and [Prince Barai](https://www.linkedin.com/in/prince-pratik7/), University of Engineering and Management Kolkata delivered various AI use cases with excellent examples. + +Beyond features of AI, several presentations and demonstrations addressed how data-driven innovation can be brought to life, and what steps are needed to move AI out of the lab and into mainstream business. + +The OpenPOWER and AI Workshop provided opportunities for young students to initiate their own AI-related projects and collaborations. + +  + +![](images/loyola-2-300x225.jpg) diff --git a/content/blog/accelerated-photodynamic-cancer-therapy-planning-with-fullmonte-on-openpower.md b/content/blog/accelerated-photodynamic-cancer-therapy-planning-with-fullmonte-on-openpower.md new file mode 100644 index 0000000..79532a2 --- /dev/null +++ b/content/blog/accelerated-photodynamic-cancer-therapy-planning-with-fullmonte-on-openpower.md @@ -0,0 +1,26 @@ +--- +title: "Accelerated Photodynamic Cancer Therapy Planning with FullMonte on OpenPOWER" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Abstract + +Photodynamic therapy (PDT) is a minimally-invasive cancer therapy which uses a light-activated drug (photosensitizer/PS). When the photosensitizer absorbs a photon, it excites tissue oxygen into a reactive state which causes very localized cell damage. The light field distribution inside the tissue is therefore one of the critical parameters determining the treatment's safety and efficacy. While FDA-approved and used for superficial indications, PDT has yet to be widely adopted for interstitial use for larger tumours using light delivered by optical fibres due to a lack of simulation and planning optimization software. Because tissue at optical wavelengths has a very high scattering coefficient, extensive Monte Carlo modeling of light transport is required to simulate the light distribution for a given treatment plan. To enable PDT planning, we demonstrate here our “FullMonte” system which uses a CAPI-enabled FPGA to simulate light propagation 4x faster and 67x more power-efficiently than a highly-tuned multicore CPU implementation. With coherent low-latency access to host memory, we are not limited by the size of on-chip memory and are able to transfer results to and from the accelerator rapidly, which will support our iterative planning flow. Potential advantages of interstitial PDT include less invasiveness and fewer post-operative complications than surgery, better damage targeting and confinement than radiation therapy, and, unlike chemotherapy, no systemic toxicity. 
While attractive in developed markets for its better outcomes, PDT is doubly attractive in emerging regions because it offers the possibility of a single-shot treatment with very low-cost and even portable equipment supported by remotely-provided computing services for planning. + +### Bios + +Jeffrey Cassidy, MASc, PEng is a PhD candidate in Electrical and Computer Engineering at the University of Toronto. Lothar Lilge, PhD is a senior scientist at the Princess Margaret Cancer Centre and a professor of Medical Biophysics at the University of Toronto. Vaughn Betz, PhD is the NSERC-Altera Chair in Programmable Silicon at the University of Toronto. + +### Acknowledgements + +The work is supported by the Canadian Institutes of Health Research, the Canadian Natural Sciences and Engineering Research Council, IBM, Altera, Bluespec, and the Southern Ontario Smart Computing Innovation Platform. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Cassidy-Jeff_OPFS22015_031015_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/accelerating-key-value-stores-kvs-with-fpgas-and-openpower.md b/content/blog/accelerating-key-value-stores-kvs-with-fpgas-and-openpower.md new file mode 100644 index 0000000..619e3a8 --- /dev/null +++ b/content/blog/accelerating-key-value-stores-kvs-with-fpgas-and-openpower.md @@ -0,0 +1,92 @@ +--- +title: "Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER" +date: "2015-11-13" +categories: + - "blogs" +tags: + - "capi" + - "fpga" + - "xilinx" + - "kvs" +--- + +_By Michaela Blott, Principal Engineer, Xilinx Research_ + +First, a bit of background-- I lead a research team in the European headquarters of Xilinx where we look into FPGA-based solutions for data centers. We experiment with the most advanced platforms and tool flows, hence our interest in OpenPOWER. If you haven't worked with an FPGA yet, it’s a fully programmable piece of silicon that allows you to create the perfect hardware circuit for your application thereby achieving best-in-class performance through customized data-flow architectures, as well as substantial power savings.  That means we can investigate how to make data center applications faster, smarter and greener while scrutinizing silicon features and tool flows. Our first application deep-dive was, and still is, key-value stores. + +Key-value stores (KVS) are a fundamental part of today’s data center functionality. Facebook, Twitter, YouTube, flickr and many others use key-value stores to implement a tier of distributed caches for their web content to alleviate access bottlenecks on relational databases that don’t scale well. Up to 30% of data center servers implement key-value stores. But data centers are hitting a wall with performance requirements that drive trade-offs between high DRAM costs (in-memory KVS), bandwidth, and latency. + +We’ve been investigating key-value stores such as memcached since 2013 \[1,2\]. Initially the focus was on pure acceleration and power reduction. Our work demonstrated a substantial 35x performance/power improvement versus the fastest x86 results published at the time. The trick was to completely transform the multithreaded software implementation into a data-flow architecture inside an FPGA as shown below. 
+ +\[caption id="attachment\_2117" align="aligncenter" width="693"\]![Fig 1](images/Fig-1.jpg) Figure 1: 10Gbps memcached with FPGAs\[/caption\] + +However, there were a number of limitations: First, we were not happy with the constrained amount of DRAM that can be attached to an FPGA -- capacity is really important in the KVS context. Secondly, we were concerned about supporting more functionality.   For example, for protocols like Redis with its 200 commands, things can get complicated. Thirdly, we worried about ease-of-use, which is a typical adoption barrier for FPGAs. Finally, things become even more interesting once you add intelligence on top of your data: data analytics, object recognition, encryption, you name it. For this we really need a combination of compute resources that coherently shares memory. That’s exactly why OpenPOWER presented a unique and most timely opportunity to experiment with coherent interfaces. + +**Benchmarking CAPI** + +CAPI, the Coherent Accelerator Processor Interface, enables high performance and simple programming models for attaching FPGAs to POWER8 systems. First, we benchmarked PCI-E and CAPI acceleration against x86 in-memory models to determine the latency of PCI-E and CAPI. The results are explained below: + +\[caption id="attachment\_2118" align="aligncenter" width="619"\]![Figure2_new](images/Figure2_new.jpg) Figure 2: System level latency OpenPower with FPGA vs x86\[/caption\] + +**Latency** + +PCI-E DMA Engines and CAPI perform significantly better than typical x86 implementations. At 1.45 microseconds, CAPI operations are so low-latency that overall system-level impact is next to negligible.  Typical x86 installations service memcached requests within a range of 100s to 1000s of microseconds. Our OpenPower CAPI installation services the same requests in 3 to 5 microseconds, as illustrated in Figure 2 (which uses a logarithmic scale). + +\[caption id="attachment\_2119" align="aligncenter" width="698"\]![Figure3_new](images/Figure3_new.jpg) Figure 3: PCIe vs CAPI Bandwidth over transfer sizes\[/caption\] + +**Bandwidth** + +Figure 3 shows measured bandwidth vs. transfer size for CAPI in comparison to a generic PCIe DMA. The numbers shown are actual measurements \[4\] and are representative in that PCIe performance is typically very low for small transfer sizes and next to optimal for large transfer sizes. So for small granular access, CAPI far outperforms PCIe. Because of this, CAPI provides a perfect fit for the small transfer sizes as required in the KVS scenario. For implementing object storage in host memory, we are really only interested in using CAPI in the range of transfer sizes of 128 bytes to 1kbyte. Smaller objects can be easily accommodated in FPGA-attached DRAM; larger objects can be accommodated in Flash (see also our HotStorage 2015 publication \[3\]). + +**FPGA Design** + +Given the promising benchmarking results, we proceeded to integrate the host memory via CAPI. For this we created a hybrid memory controller which routes and merges requests and responses between the various storage types, handles reordering, and provides a gearbox for varying access speeds and bandwidths. With these simple changes, we now have up to 1 Terabyte of coherent memory space at our disposal without loss of performance! Figure 4 shows the full implementation inside the FPGA. 
+ +\[caption id="attachment\_2120" align="aligncenter" width="748"\]![Figure4](images/Figure4.jpg) Figure 4: Memcached Implementation with OpenPower and FPGA\[/caption\] + +**Ease of Use** + +Our next biggest concern was ease of use for both FPGA design entry as well as with respect to host–accelerator integration. In regards to the latter, OpenPOWER exceeded our expectations. Using the provided API from IBM (libcxl) as well as the POWER Service Layer IP that resides within the FPGA (PSL), we completed system integration within a matter of weeks while saving huge amounts of code: 800 lines of code to be precise for x86 driver, memory allocation, and pinning, and 13.5k fewer instructions executed! + +Regarding the FPGA design, it was of utmost importance to ensure that it is possible to create a fully functional and high-performing design through a high-level design flow (C/C++ at minimum), in the first instance using Xilinx’s high-level synthesis tool, Vivado HLS. The good news was that we fully succeeded in doing this and the resulting application design was fully described in C/C++, achieving a 60% reduction in lines of code (11359 RTL vs 4069 HLS lines). The surprising bonus was that we even got a resource reduction – for FPGA-savvy readers: 22% in LUTs & 30% in FFs. And let me add, just in case you are wondering, the RTL designers were at the top of their class! + +The only low-level aspects left in the design flow are the basic infrastructure IP, such as memory controllers and network interfaces, which are still manually integrated. In the future, this will be fully automated through SDAccel. In other words, a full development environment that requires no further RTL development is on the horizon. + +**Results** + +\[caption id="attachment\_2121" align="aligncenter" width="693"\]![Figure5](images/Figure5.jpg) Figure 5: Demonstration at the OpenPower Summit 2015\[/caption\] + +We demonstrated the first operational prototype of this design at Impact in April 2014 and then demonstrated the fully operational demo vehicle (shown in Figure 5) including fully CAPI-enabled access to host memory at the OpenPOWER Summit in March 2015. The work is now fully integrated with [IBM’s SuperVessel](http://www.ptopenlab.com). In the live demonstration, the OpenPOWER system outperforms an x86 implementation by 20x (see Figure 6)! + +\[caption id="attachment\_2122" align="aligncenter" width="625"\]![kvs_comparison](images/kvs_comparison-1024x577.jpg) Figure 6: Screenshot of network tester showing response traffic rates from OpenPower with FPGA acceleration versus x86 software solution\[/caption\] + +**Summary** + +The Xilinx demo architecture enables key-value stores that can operate at **60Gbps with 2TB value-store capacity** that fits within a 2U OpenPOWER Server. The architecture can be easily extended. We are actively investigating using Flash to expand value storage even further for large granular access. But most of all, we are really excited about the opportunities for this architecture when combining this basic functionality with new capabilities such as encryption, compression, data analytics, and face & object recognition! + +**Getting Started** + +- Visit [Xilinx at SC15](http://www.xilinx.com/about/events/sc15.html)! November 15-19, Austin, TX. 
+- Learn more about [POWER8 CAPI](http://www-304.ibm.com/webapp/set2/sas/f/capi/home.html) +- Purchase a CAPI developer kit from [Nallatech](http://www.nallatech.com/solutions/openpower-capi-developer-kit-for-power-8/) or [AlphaData](http://www.alpha-data.com/dcp/capi.php) +- License this technology through [Xilinx](http://www.xilinx.com/) today.  We work directly with customers and data centers to scale performance/watt in existing deployments with hardware based KVS accelerators. If you are interested in this technology, please contact us. + +\================================================================================== + +**References** + +_\[1\] M.Blott, K.Vissers, K.Karras, L.Liu, Z. Istvan, G.Alonso: HotCloud 2013; Achieving 10Gbps line-rate key-value stores with FPGAs_ + +_\[2\] M.Blott, K. Vissers: HotChips’14; Dataflow Architectures for 10Gbps Line-rate Key-value-Stores._ + +_\[3\] M.Blott, K.Vissers, L.Liu: HotStorage 2015; Scaling out to a Single-Node 80Gbps Memcached Server with 40Terabytes of Memory_ + +_\[4\] PCIe bandwidth reference numbers were kindly provided by Noa Zilberman & Andrew Moore from Cambridge University_ + +* * * + +**_About Michaela Blott_** + +![Michaela Blott](images/Michaela-Blott.png) + +Michaela Blott graduated from the University of Kaiserslautern in Germany. She has worked in both research institutions (ETH and Bell Labs) and development organizations and was deeply involved in large scale international collaborations such as NetFPGA-10G. Today, she works as a principal engineer at the Xilinx labs in Dublin heading a team of international researchers, investigating reconfigurable computing for data centers and other new application domains. Her expertise includes data centers, high-speed networking, emerging memory technologies and distributed computing systems, with an emphasis on building complete implementations. diff --git a/content/blog/accelerator-opportunities-with-openpower.md b/content/blog/accelerator-opportunities-with-openpower.md new file mode 100644 index 0000000..eabf59c --- /dev/null +++ b/content/blog/accelerator-opportunities-with-openpower.md @@ -0,0 +1,31 @@ +--- +title: "Accelerator Opportunities with OpenPower" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Abstract + +The OpenPower architecture provides unique capabilities which will enable highly effective and differentiated acceleration solutions.   The OpenPower Accelerator Workgroup is chartered to develop both hardware and software standards which provide vendors the ability to develop these solutions.  The presentation will cover an overview of the benefits of the OpenPower architecture for acceleration solutions.   We will provide an overview of the Accelerator Workgroup's plans and standards roadmap.   We will give an overview of the OpenPower CAPI development kit.   We will also walk through an example of a CAPI attached acceleration solution. 
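To give a concrete flavor of what the host side of a CAPI-attached acceleration solution can look like, here is a minimal sketch against IBM's user-space libcxl API (cxl_afu_open_dev, cxl_afu_attach, cxl_mmio_map and friends). The AFU device path, the work-element descriptor layout and the MMIO register offsets are placeholders invented for illustration, not details from this presentation or the development kit.

```c
/* Minimal host-side sketch of driving a CAPI-attached AFU via libcxl.
 * The device path, WED layout and MMIO offsets are illustrative
 * placeholders, not details from this presentation. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <libcxl.h>

struct wed {                        /* work element descriptor shared with the AFU */
    volatile uint64_t status;       /* AFU sets this to non-zero when it is done   */
    uint64_t src_addr;              /* host buffer the AFU reads coherently        */
    uint64_t len;
};

int main(void)
{
    struct cxl_afu_h *afu = cxl_afu_open_dev("/dev/cxl/afu0.0d");   /* placeholder path */
    if (!afu) { perror("cxl_afu_open_dev"); return 1; }

    struct wed *w = aligned_alloc(128, 128);        /* one cache line for the descriptor */
    w->status = 0;
    w->src_addr = (uint64_t)(uintptr_t)aligned_alloc(128, 4096);
    w->len = 4096;

    /* Attach this process context; the AFU receives a pointer to the WED and can
     * then read and write host memory coherently, with no copies or explicit pinning. */
    if (cxl_afu_attach(afu, (uint64_t)(uintptr_t)w)) { perror("cxl_afu_attach"); return 1; }

    /* Map the AFU's MMIO space for control and status registers. */
    if (cxl_mmio_map(afu, CXL_MMIO_BIG_ENDIAN)) { perror("cxl_mmio_map"); return 1; }
    cxl_mmio_write64(afu, 0x0, 1);                  /* hypothetical "start" register */

    while (w->status == 0)                          /* wait for the AFU to flag completion */
        ;

    uint64_t cycles = 0;
    cxl_mmio_read64(afu, 0x8, &cycles);             /* hypothetical cycle-count register */
    printf("AFU finished in %llu cycles\n", (unsigned long long)cycles);

    cxl_mmio_unmap(afu);
    cxl_afu_free(afu);
    return 0;
}
```

The point of the sketch is simply that, with CAPI, the accelerator shares the application's address space: the host passes a pointer to a shared descriptor and the AFU reads and writes host memory coherently, rather than going through a driver-managed DMA and buffer-copy stack.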
+ +### Presentation agenda + +- Overview of opportunity for OpenPower acceleration solutions +- OpenPower Accelerator workgroup charter and standards roadmap +- OpenPower CAPI Development Kit +- CAPI attached acceleration solution example + +### Bio + +[Nick Finamore](https://www.linkedin.com/profile/view?id=4723882&authType=NAME_SEARCH&authToken=2y98&locale=en_US&srchid=32272301421437850712&srchindex=3&srchtotal=8&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421437850712%2CVSRPtargetId%3A4723882%2CVSRPcmpt%3Aprimary), Altera Corporation Product Marketing Manager for Software Development Tools; Chairperson, OpenPower Foundation Accelerator Workgroup + +For the past 3 years Nick has been leading Altera’s computing acceleration initiative and the marketing of Altera’s SDK for OpenCL.  Previously Nick was in several leadership positions at early stage computing and networking technology companies including Netronome, Ember(SiLabs) and Calxeda.   Nick also had an 18-year career at Intel where he held several management positions, including general manager of the network processor division. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Finamore-Nick_OPFS2015_Altera_031215_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/acrc-openpower.md b/content/blog/acrc-openpower.md new file mode 100644 index 0000000..2aec3fb --- /dev/null +++ b/content/blog/acrc-openpower.md @@ -0,0 +1,30 @@ +--- +title: "Singapore's A*CRC Joins the OpenPOWER Foundation to Accelerate HPC Research" +date: "2016-03-17" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Ganesan Narayanasamy, Senior Manager, IBM Systems_ + +Singapore’s Agency for Science, Technology and Research (A\*STAR) is the largest government funded research organization in Singapore, with over 5,300 personnel in 14 research institutes across the country. + +[![A STAR Computational Resource Centre](images/A-STAR-Computational-Resource-Centre.png)](https://openpowerfoundation.org/wp-content/uploads/2016/03/A-STAR-Computational-Resource-Centre.png) + +A\*STAR Computational Resource Centre (A\*CRC) provides high performance computing (HPC) resources to the entire A\*STAR research community. Currently A\*CRC supports the HPC needs of an 800-member user community and manages several high-end computers, including an IBM 822LC system with NVIDIA K80 GPU cards and a Mellanox EDR switch to port and optimize the HPC applications. It is also responsible for very rapidly growing data storage resources. + +A\*CRC will work with IBM and the OpenPOWER Foundation to hasten its path to develop applications on OpenPOWER Systems leveraging the Foundation’s ecosystem of technology. + +https://youtu.be/F07fJHhQdu4 + +Experts at A\*CRC will explore the range of scientific applications that leverage the Power architecture as well as NVIDIA’s GPUs and Mellanox’s 100 Gb/sec InfiniBand switches. The switches are designed to work with IBM's Coherent Accelerator Processor Interface (CAPI), an OpenPOWER technology that allows attached accelerators to connect with the Power chip at a deep level. + +A\*CRC also will work with the OpenPOWER Foundation on evolving programming models such as OpenMP, the open multiprocessing API designed to support multi-platform shared-memory programming. 
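For readers less familiar with the model, the short example below shows the kind of shared-memory parallelism OpenMP expresses: one directive turns a loop into multi-threaded work over shared data. It is a generic illustration, not code from A\*CRC.

```c
/* Generic OpenMP illustration (not A*CRC code): sum an array in parallel.
 * Threads share the array; reduction(+:sum) gives each thread a private
 * partial sum that OpenMP combines when the loop ends. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        sum += a[i];
    }

    printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}
```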
+ +“We need to anticipate the rise of new high performance computing architectures that bring us closer to exascale and prepare our communities,” A\*CRC CEO Marek Michalewicz noted in a statement. + +[![SCF2016-logo_final_retina2](images/SCF2016-logo_final_retina2-300x129.png)](https://openpowerfoundation.org/wp-content/uploads/2016/03/SCF2016-logo_final_retina2.png) + +This week, A\*STAR is hosting the [Singapore Supercomputing Frontiers Conference](http://supercomputingfrontiers.com/2016/). To learn more about their work, take part in our OpenPOWER workshop on March 18 and stay tuned for additional updates. diff --git a/content/blog/advancing-human-brain-project-openpower.md b/content/blog/advancing-human-brain-project-openpower.md new file mode 100644 index 0000000..e361968 --- /dev/null +++ b/content/blog/advancing-human-brain-project-openpower.md @@ -0,0 +1,24 @@ +--- +title: "Advancing the Human Brain Project with OpenPOWER" +date: "2016-10-27" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Dr. Dirk Pleiter, Research Group Leader, Jülich Supercomputing Centre_ + +![Human Brain Project and OpenPOWER members NVIDIA, IBM](images/HBP_Primary_RGB-1-1024x698.png) + +The [Human Brain Project](https://www.humanbrainproject.eu/) (HBP), a flagship project [funded by the European Commission](http://ec.europa.eu/research/fp7/index_en.cfm), has set itself an ambitious goal: Unifying our understanding of the human brain. To achieve it, researchers need a High-Performance Analytics and Compute Platform composed of supercomputers with features that are currently not available, but OpenPOWER is working to make them a reality. + +Through a Pre-Commercial Procurement (PCP) the HBP initiated the necessary R&D, and turned to the OpenPOWER Foundation for help. During three consecutive phases, a consortium of [IBM and NVIDIA has successfully been awarded R&D contracts](http://www.fz-juelich.de/SharedDocs/Pressemitteilungen/UK/EN/2016/16-09-27hbp_pilotsysteme.html). As part of this effort, a pilot system called [JURON](https://hbp-hpc-platform.fz-juelich.de/?page_id=1073) (a combination of Jülich and neuron) has been installed at Jülich Supercomputing Centre (JSC). It is based on the [new IBM S822LC for HPC servers](https://www.ibm.com/blogs/systems/ibm-nvidia-present-nvlink-server-youve-waiting/), each equipped with two POWER8 processors and four NVIDIA P100 GPUs. + +Marcel Huysegoms, a scientist from [the Institute for Neuroscience and Medicine](http://www.fz-juelich.de/inm/EN/Home/home_node.html), was able, with support from the JSC, to demonstrate the usability of the system for his brain image registration application soon after deployment. By exploiting the processing capabilities of the GPUs without further tuning, he achieved a significant speed-up compared to the currently used production system based on Haswell x86 processors and K80 GPUs. + +Improved compute capabilities are not the only thing that matters for brain research: by designing and implementing the Global Sharing Layer (GSL), the non-volatile memory cards mounted on all nodes became a byte-addressable, globally accessible memory resource. Using JURON it could be shown that data can be read at a rate that is only limited by network performance. These new technologies will open new opportunities for enabling data-intensive workflows in brain research, including data visualization. + +The pilot system will be the first system based on POWER processors where graphics support is being brought to the HPC node. 
In combination with the GSL it will be possible to visualize large data volumes that are, as an example, generated by brain model simulations. Flexible allocation of resources to compute applications, data analytics and visualization pipelines will be facilitated through another new component, namely the dynamic resource management. It allows for suspension of execution of parallel jobs for a later restart with a different number of processes. + +JURON clearly demonstrates the potential of a technology ecosystem settled around a processor architecture with interfaces that facilitate efficient integration of various devices for efficient processing, moving and storing of data. In other words, it demonstrates the collaborative potential of OpenPOWER. diff --git a/content/blog/advancing-the-openpower-vision.md b/content/blog/advancing-the-openpower-vision.md new file mode 100644 index 0000000..80225cb --- /dev/null +++ b/content/blog/advancing-the-openpower-vision.md @@ -0,0 +1,22 @@ +--- +title: "Advancing the OpenPOWER vision" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Abstract + +It’s been nearly a year since the public launch of OpenPower and the community of technology leaders that make up our community have made significant progress towards our original goals. While growth of the membership is a critical factor, our success will come from the technology provided through the ‘open model’ and the ‘value’ solutions that are enabled by leveraging that technology. Please join us as we highlight the key components that our member community have contributed to that ‘open model’ and spotlight some examples of high value solutions enabled through members leveraging our combined capabilities and strengths. + +### Speaker + +[Gordon MacKean](https://www.linkedin.com/profile/view?id=1547172&authType=NAME_SEARCH&authToken=PNgl&locale=en_US&trk=tyah2&trkInfo=tarId%3A1421437126543%2Ctas%3AGordon%20McKean%2Cidx%3A1-1-1) is a Sr. Director with the Hardware Platforms team at Google. He leads the team responsible for the design and development of the server and storage products used to power Google data centers. Prior to Google, Gordon held management and design roles at several networking companies including Matisse Networks, Extreme Networks, and Nortel Networks. Gordon is a founder of OpenPOWER Foundation and serves as the Chairman of the Board of Directors. Gordon holds a Bachelors degree in Electrical Engineering from Carleton University. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/MacKean-McCredie_OPFS2015_KEYNOTE15-03-16-gm5.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/ai-improve-rural-healthcare.md b/content/blog/ai-improve-rural-healthcare.md new file mode 100644 index 0000000..1efa249 --- /dev/null +++ b/content/blog/ai-improve-rural-healthcare.md @@ -0,0 +1,24 @@ +--- +title: "AI to Improve Rural Healthcare Discussed at OpenPOWER Summit Europe" +date: "2018-10-18" +categories: + - "blogs" +tags: + - "featured" +--- + +By Dr. Praveen Kumar B.A. M.B.B.S, M.D., professor, Department of Community Medicine, PES Institute of Medical Sciences and Research + +It was great to attend the [OpenPOWER Summit Europe](https://openpowerfoundation.org/summit-2018-10-eu/) in Amsterdam earlier this month. As an academia member from a medical background, it was the first completely technical forum I had attended at an international level. 
+ +The [PES Institute of Medical Sciences](http://pesimsr.pes.edu/), India has been working with IBM and the OpenPOWER community recently on developing AI solutions for patient care in our rural facility. We are a tertiary care teaching institute catering to a rural population of around one million. I attended the OpenPOWER Summit Europe to discuss the need and opportunity for deploying AI solutions in our work. + + + +**[Artificial Intelligence in Healthcare at OpenPOWER Summit Europe](//www.slideshare.net/OpenPOWERorg/artificial-intelligence-in-healthcare-at-openpower-summit-europe "Artificial Intelligence in Healthcare at OpenPOWER Summit Europe")** from **[OpenPOWERorg](https://www.slideshare.net/OpenPOWERorg)** + +AI in health care was a featured theme throughout the OpenPOWER Summit Europe. Professor Florin Manaila demonstrated solutions he has worked on for breast cancer diagnosis and grading using image processing. And Professor Antonio Liotta spoke about machine learning and AI-related research in his lab. + +The AI4Good Hackathon invited researchers from across the world to find solutions for health challenges – particularly in cancer care. I was glad to see students from India and Europe participating. + +I look forward to networking with other academic and industry teams to work on further developing model training and implementation. Through collaboration, institutions can partner together to secure funding and innovate toward a brighter future. diff --git a/content/blog/algo-logic-systems-launches-capi-enabled-order-book-running-on-ibm-power8-server.md b/content/blog/algo-logic-systems-launches-capi-enabled-order-book-running-on-ibm-power8-server.md new file mode 100644 index 0000000..c17dad5 --- /dev/null +++ b/content/blog/algo-logic-systems-launches-capi-enabled-order-book-running-on-ibm-power8-server.md @@ -0,0 +1,39 @@ +--- +title: "Algo-Logic Systems launches CAPI enabled Order Book running on IBM® POWER8™ server" +date: "2015-03-18" +categories: + - "press-releases" + - "blogs" +--- + +SANTA CLARA, Calif., March 16, 2015 /PRNewswire/ -- Algo-Logic Systems, a recognized leader in providing hardware-accelerated, deterministic, ultra-low-latency products, systems and solutions for accelerated finance, packet processing and embedded system industries, announced today availability of their new Coherent Accelerator Processor Interface (CAPI) enabled Full Order Book solution on IBM® POWER8™ systems. The CAPI enabled Order Book performs all feed processing and book building in logic inside a single Stratix V FPGA on the Nallatech P385 card. The system enables software to directly receive order book snapshots in the coherent shared memory with the least possible latency. The low latency Order Book is designed using the on-chip memory for customer book sizes with many thousands of open orders, up to 24 symbols, and reporting of six L-2 book levels. For use cases where millions of open orders and full market depth need to be tracked, the scalable CAPI enabled Order Book is still implemented with a single FPGA but stores data in off-chip memory. 
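As a purely hypothetical illustration of what receiving those snapshots in coherent shared memory could look like from the software side, the sketch below polls an L-2 snapshot structure that the FPGA would update in place. The structure layout, field names and sequence-number convention are invented for this sketch and are not Algo-Logic's actual interface.

```c
/* Hypothetical illustration only: the snapshot layout and sequence-number
 * scheme are invented for this sketch, not Algo-Logic's actual interface. */
#include <stdint.h>
#include <stdio.h>

#define L2_LEVELS 6                   /* six L-2 book levels, as described above */

struct l2_level {
    uint64_t price;
    uint64_t quantity;
    uint32_t order_count;
};

struct l2_snapshot {
    volatile uint64_t seq;            /* bumped by the accelerator on every update */
    uint32_t symbol_id;
    struct l2_level bids[L2_LEVELS];
    struct l2_level asks[L2_LEVELS];
};

/* Copy a consistent snapshot out of shared memory: retry if the
 * accelerator updated it while we were reading (seqlock-style). */
static struct l2_snapshot read_snapshot(const struct l2_snapshot *shm)
{
    struct l2_snapshot copy;
    uint64_t before, after;
    do {
        before = shm->seq;
        copy = *shm;
        after = shm->seq;
    } while (before != after || (before & 1));    /* odd seq = update in flight */
    return copy;
}

int main(void)
{
    /* Stand-in for the region the FPGA would write; in a real system this
     * would be coherent host memory shared with the accelerator. */
    static struct l2_snapshot dummy = { .seq = 2, .symbol_id = 42 };
    struct l2_snapshot s = read_snapshot(&dummy);
    printf("symbol %u, best bid %llu x %llu\n", s.symbol_id,
           (unsigned long long)s.bids[0].price,
           (unsigned long long)s.bids[0].quantity);
    return 0;
}
```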
+ +Photo - [http://photos.prnewswire.com/prnh/20150314/181760](http://photos.prnewswire.com/prnh/20150314/181760) + +The CAPI Order Book building process includes (i) receiving parsed market data feed messages, (ii) building and maintaining an L-3 order-level replica of the exchange's displayable book, (iii) building L-2 books for each symbol with the market depth and weight summary of all open orders, (iv) reporting a locally generated copy of the top-of-book with a configurable amount of market depth (L-2 snapshots) as well as the last trade information when orders execute. By using the IBM POWER8 server, algorithms can run on the highest number of cores and seamlessly integrate with the Order Book hardware accelerator by means of the coherent shared memory. Through a simple memory-mapped IO (MMIO) address space, all the parameters are configurable and statistics can be easily read from software. Algo-Logic's CAPI enabled Full Order Book achieves deterministic, ultra-low latency without jitter regardless of the number of tracked symbols at data rates of up to 10 Gbps. Key features include: + +- Accelerated Function Unit (AFU) is implemented on FPGA under CAPI +- Full Order Book with an L-2 default size of 6 price-levels per symbol, fully scalable to larger sizes +- By default L-2 snapshots are generated for each symbol + - The number of symbols in use and their respective snapshots are user configurable + - L-2 snapshot generation frequency is also user configurable on an event basis or at a customizable interval +- Full Order Book output logic seamlessly connects to customer's proprietary algorithmic trading strategies +- Trader has access to the latest market depth (L-2 snapshots) in coherent shared memory +- L-3 Book updates complete with processing latency of less than 230 nanoseconds +- L-2 Book updates complete with processing latency of less than 120 nanoseconds + +The CAPI Order Book can be seamlessly integrated with other components of Algo-Logic's Low Latency Application Library, including pre-built protocol parsing libraries, market data filters, and TCP/IP endpoints to deploy complete tick-to-trade applications within a single Stratix V FPGA platform. + +Algo-Logic's world-class hardware accelerated systems and solutions are used by banks, trading firms, market makers, hedge-funds, and financial institutions to accelerate their network processing for protocol parsing, symbol filtering, Risk-Checks (sec 15c 3-5), order book processing, order injection, proprietary trading strategies, high frequency trading, financial surveillance systems, and algorithmic trading. + +Availability: The CAPI Order Book solution is currently shipping; for additional information please contact [Info@algo-logic.com](mailto:Info@algo-logic.com) or visit our website at: [www.algo-logic.com](http://www.algo-logic.com/) + +About Algo-Logic Systems: Algo-Logic Systems, Inc., is the recognized leader and developer of Gateware Defined Networking® (GDN) for Field Programmable Gate Array (FPGA) devices. 
Algo-Logic IP-Cores are used for accelerated finance, packet processing and classification in datacenters, and real-time data acquisition and processing in embedded hardware systems. The company has extensive experience in building complete network processing system solutions in FPGA logic. + +To view the original version on PR Newswire, visit:[http://www.prnewswire.com/news-releases/algo-logic-systems-launches-capi-enabled-order-book-running-on-ibm-power8-server-300050631.html](http://www.prnewswire.com/news-releases/algo-logic-systems-launches-capi-enabled-order-book-running-on-ibm-power8-server-300050631.html) + +SOURCE Algo-Logic Systems diff --git a/content/blog/altera-brings-fpga-based-acceleration-to-ibm-power-systems-and-announces-support-for-openpower-consortium.md b/content/blog/altera-brings-fpga-based-acceleration-to-ibm-power-systems-and-announces-support-for-openpower-consortium.md new file mode 100644 index 0000000..3e9aa96 --- /dev/null +++ b/content/blog/altera-brings-fpga-based-acceleration-to-ibm-power-systems-and-announces-support-for-openpower-consortium.md @@ -0,0 +1,9 @@ +--- +title: "Altera Brings FPGA-based Acceleration to IBM Power Systems and Announces Support for OpenPOWER Consortium" +date: "2014-11-18" +categories: + - "press-releases" + - "blogs" +--- + +San Jose, Calif., November 18, 2013—Altera Corporation (NASDAQ: ALTR) today announced the latest release of the Altera SDK for OpenCL supports IBM Power Systems servers as an OpenCL system host. Customers are now able to develop OpenCL code that targets IBM Power Systems CPUs and accelerator boards with Altera FPGAs as a high-performance compute solution. FPGA accelerated systems can achieve a 5-20X performance boost over standard CPU based servers. Altera will showcase the performance advantage of using FPGAs to accelerate IBM Power Systems, as well as other OpenCL-focused demonstrations, this week at SuperComputing 2013 in booth #4332. diff --git a/content/blog/altera-joins-ibm-openpower-foundation-to-enable-the-development-of-next-generation-data-centers.md b/content/blog/altera-joins-ibm-openpower-foundation-to-enable-the-development-of-next-generation-data-centers.md new file mode 100644 index 0000000..d7ad387 --- /dev/null +++ b/content/blog/altera-joins-ibm-openpower-foundation-to-enable-the-development-of-next-generation-data-centers.md @@ -0,0 +1,9 @@ +--- +title: "Altera Joins IBM OpenPOWER Foundation to Enable the Development of Next-Generation Data Centers" +date: "2014-03-24" +categories: + - "press-releases" + - "blogs" +--- + +San Jose, Calif., March 24, 2014– Altera Corporation (Nasdaq: ALTR) today announced it joined the IBM OpenPOWER Foundation, an open development alliance based on IBM's POWER microprocessor architecture. Altera will collaborate with IBM and other OpenPOWER Foundation members to develop high-performance compute solutions that integrate IBM POWER CPUs with Altera’s FPGA-based acceleration technologies for use in next-generation data centers. 
diff --git a/content/blog/american-megatrends-custom-built-server-management-platform-for-openpower.md b/content/blog/american-megatrends-custom-built-server-management-platform-for-openpower.md new file mode 100644 index 0000000..daefe46 --- /dev/null +++ b/content/blog/american-megatrends-custom-built-server-management-platform-for-openpower.md @@ -0,0 +1,35 @@ +--- +title: "American Megatrends Custom Built Server Management Platform for OpenPOWER" +date: "2015-11-13" +categories: + - "blogs" +tags: + - "power8" + - "ami" +--- + +**_By Christine M. Follett, Marketing Communications Manager, American Megatrends, Inc._** + +As one of the newest members of the OpenPOWER Foundation, we at American Megatrends, Inc. (AMI) are very excited to get started and contribute to the mission and goals of the Foundation. Our President and CEO, Subramonian Shankar, who founded the company thirty years ago, shares his thoughts on joining the Foundation: + +“Participating in OpenPOWER with partners such as IBM and TYAN will allow AMI to more rapidly engage as our market continues to grow, and will ensure our customers receive the industry’s most reliable and feature-rich platform management technologies. As a market leader for core server firmware and management technologies, we are eager to assist industry leaders in enabling next generation data centers as they rethink their approach to systems design.” + +![MegaRAC_SPX_logo_1500x1200](images/MegaRAC_SPX_logo_1500x1200-300x240.png) The primary technology that AMI is currently focusing on relative to its participation in the OpenPOWER Foundation is a full-featured server management solution called MegaRAC® SPX, in particular a custom version of this product developed for POWER8-based platforms. MegaRAC SPX for POWER8 is a powerful development framework for server management solutions composed of firmware and software components based on industry standards like IPMI 2.0, SMASH, and Serial over LAN (SOL). It offers key serviceability features including remote presence, CIM profiles and advanced automation. + +MegaRAC SPX for POWER8 also features a high level of modularity, with the ability to easily configure and build firmware images by selecting features through an intuitive graphical development tool chain. These features are available in independently maintained packages for superior manageability of the firmware stack. You can learn more about MegaRAC SPX at our website dedicated to AMI remote management technology [here](http://www.megarac.com/live/embedded/megarac-spx/). + +![AMI dashboard](images/AMI-dashboard.png) + +Foundation founding member TYAN has been an early adopter of MegaRAC SPX for POWER8, selecting it for one of their recent platforms. According to Albert Mu, Vice President of MITAC Computing Technology Corporation’s TYAN Business Unit, “AMI has been a critical partner in the development of our POWER8-based platform, the TN71-BP012, which is based on the POWER8 Architecture and provides tremendous memory capacity as well as outstanding performance that fits in datacenter, Big Data or HPC environments. We are excited that AMI has strengthened its commitment to the POWER8 ecosystem by joining the OpenPOWER Foundation.” + +Founded in 1985, AMI is known worldwide for its AMIBIOS® firmware. From our start as the industry’s original independent BIOS vendor, we have evolved to become a key supplier of state-of-the-art hardware, software and utilities to top-tier manufacturers of desktop, server, mobile and embedded computing systems. 
+ +With AMI’s extensive product lines, we are uniquely positioned to provide all of the fundamental components to help OpenPOWER innovate across the system stack, providing performance, manageability, and availability for today's modern data centers. AMI prides itself on its unique position as the only company in the industry that offers products and services based on all of these core technologies. + +AMI is extremely proud to join our fellow OpenPOWER member organizations working collaboratively to build advanced server, networking, storage and acceleration technology as well as industry-leading open source software. Together we can deliver more choice, control and flexibility to developers of next-generation hyperscale and cloud data centers. + +* * * + +**_About Christine M. Follett_** + +_![Christine Follett](images/Christine-Follett.png)Christine M. Follett is Marketing Communications Manager for American Megatrends, Inc. (AMI). Together with the global sales and marketing team of AMI, which spans seven countries, she works to expand brand awareness and market share for the company’s diverse line of OEM, B2B/Channel and B2C technology products, including AMI's industry leading Aptio® V UEFI BIOS firmware, innovative StorTrends® Network Storage hardware and software products, MegaRAC® remote server management tools and unique solutions based on the popular Android™ and Linux® operating systems._ diff --git a/content/blog/ami-joins-openpower.md b/content/blog/ami-joins-openpower.md new file mode 100644 index 0000000..19fdb32 --- /dev/null +++ b/content/blog/ami-joins-openpower.md @@ -0,0 +1,9 @@ +--- +title: "AMI Joins OpenPOWER" +date: "2015-06-03" +categories: + - "press-releases" + - "blogs" +--- + + diff --git a/content/blog/as-computing-tasks-evolve-infrastructure-must-adapt.md b/content/blog/as-computing-tasks-evolve-infrastructure-must-adapt.md new file mode 100644 index 0000000..4f5b7ec --- /dev/null +++ b/content/blog/as-computing-tasks-evolve-infrastructure-must-adapt.md @@ -0,0 +1,11 @@ +--- +title: "As Computing Tasks Evolve, Infrastructure Must Adapt" +date: "2014-06-11" +categories: + - "industry-coverage" + - "blogs" +--- + +The litany of computing buzzwords has been repeated so often that we’ve almost glazed over: mobile, social, cloud, crowd, big data, analytics.  After a while they almost lose their meaning. + +Taken together, though, they describe the evolution of computing from its most recent incarnation — single user, sitting at a desk, typing on a keyboard, watching a screen, local machine doing all the work — to a much more amorphous activity that involves a whole new set of systems, relationships, and actions. diff --git a/content/blog/attending-openpower-developer-congress.md b/content/blog/attending-openpower-developer-congress.md new file mode 100644 index 0000000..2936735 --- /dev/null +++ b/content/blog/attending-openpower-developer-congress.md @@ -0,0 +1,67 @@ +--- +title: "We’re Attending the OpenPOWER Developer Congress — Here’s Why You Should, Too. Insights from Nimbix, Mellanox, and Xilinx" +date: "2017-05-12" +categories: + - "blogs" +tags: + - "mellanox" + - "xilinx" + - "openpower-foundation" + - "openpower-foundation-developer-congress" + - "opfdevcon17" + - "nimbix" +--- + +Prominent OpenPOWER Foundation members have provided the reasons they’re taking time out of their busy days to support the [OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/) and send their experts and team members. 
+ +This is why YOU should attend too! + +## **Nimbix Enables On-Demand Cloud for Developers** + +### **Why [Nimbix](https://www.nimbix.net/) is Participating in the OpenPOWER Developer Congress** + +As the leading public cloud provider for OpenPOWER and Power systems, Nimbix has embraced its role as a member of the OpenPOWER Foundation. Nimbix enables ISVs to get their applications ported and running on the Power architecture, and feels a responsibility to help the OpenPOWER community. This is what the company signed up for when it became a Silver-level member of the OpenPOWER Foundation. + +Nimbix works to grow the Power ecosystem for application software and broaden the software portfolio on OpenPOWER. It facilitates this by: + +- Providing ISVs and developers a Continuous Integration / Continuous Deployment (CI/CD) pipeline to deploy their source code on Power. +- Providing the ability to not just port, but to test at scale, on a supercomputer in the cloud that runs on OpenPOWER technology. +- Enabling ISVs that decide to go to market with their applications in the cloud to sell those applications directly in the Nimbix cloud. + +### **What is Nimbix Bringing to the Developer Congress?** + +“Nimbix is proud to support the OpenPOWER Developer Congress by providing resources to support Congress activities,” said Leo Reiter, CTO of Nimbix. “Through our support, we will be enabling the on-demand cloud infrastructure for the Congress so that all of the sessions and tracks can do their development in the cloud on the OpenPOWER platform.” + +Leo will be part of the team instructing cloud development and porting to Power tracks at the Congress. “As an OpenPOWER Foundation member,” Leo said, “I will be working with participants to get their applications running on Power in the cloud and providing them with tips and tools they can use to continue developing OpenPOWER applications post-conference.” + +\[caption id="attachment\_4790" align="aligncenter" width="880"\][![OpenPOWER Developer Congress](images/OPDC-Web-Banner.jpg)](https://openpowerfoundation.org/wp-content/uploads/2017/05/OPDC-Web-Banner.jpg) [Click here to register for the OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/) - May 22-25 in San Francisco.\[/caption\] + +## **Mellanox Educates on Caffe, Chainer, and TensorFlow** + +### **Why [Mellanox](http://www.mellanox.com/) is Participating in the OpenPOWER Developer Congress** + +Mellanox is not only a founding member of the OpenPOWER Foundation, but also a founding member of its Machine Learning Work Group.  AI / cognitive computing will improve our quality of life, drive emerging markets, and surely play a leading role in global economics. But to achieve real scalable performance with AI, being able to leverage cutting-edge interconnect capabilities is paramount. Typical vanilla networking just doesn’t scale, so it’s important that developers are aware of the additional performance that can be achieved by understanding the critical role of the network. + +Because Deep Learning applications are well-suited to exploit the POWER architecture, it is also extremely important to have an advanced network that unlocks the scalable performance of deep learning systems, and that is where the Mellanox interconnect comes in. The benefits of RDMA, ultra-low latency, and In-Network Computing deliver an optimal environment for data-ingest at the critical performance levels required by POWER-based systems. 
+ +Mellanox is committed to working with the industry’s thought leaders to drive technologies in the most open way. Its core audience has always been end users — understanding their challenges and working with them to deliver real solutions. Today, more than ever, the developers, data-centric architects, and data scientists are the new generation of end users that drive the data center. They are defining the requirements of the data center, establishing its performance metrics, and delivering the fastest time to solution by exploiting the capabilities of the OpenPOWER architecture.  Mellanox believes that participating in the OpenPOWER Developer Congress gives the company an opportunity to educate developers on its state-of-the-art networking and also demonstrates its commitment to innovation with open development and open standards. + +### **What is Mellanox Bringing to the Developer Congress?** + +Mellanox will provide on-site expertise to discuss the capabilities of Mellanox Interconnect Solutions. Dror Goldenberg, VP of Software Architecture at Mellanox, will be present to further dive into areas of machine learning acceleration and the frameworks that already take advantage of Mellanox capabilities, such as Caffe, Chainer, TensorFlow, and others. + +Mellanox is the interconnect leader in AI / cognitive computing data centers, and already accelerates machine learning frameworks to achieve from 2x to 18x speedup for image recognition, NLP, voice recognition, and more. The company’s goal is to assist developers with their applications to achieve maximum scalability on POWER-based systems. + +## **Xilinx Offers Experts in FPGAs and Machine Learning Algorithms** + +### **Why [Xilinx](https://www.xilinx.com/) is Participating in the OpenPOWER Developer Congress** + +Xilinx, as a Platinum-level member of the OpenPOWER Foundation, looks forward to supporting the Foundation’s outreach activities. The company particularly likes the format of the upcoming OpenPOWER Developer Congress, because it’s focused on developers and provides many benefits developers will find helpful. + +Xilinx appreciates the unique nature of the Congress, in that it provides developers the opportunity to get up close to the technology and, in some cases, work on it directly. It also allows developers to make good connections with other companies who participate in the Congress — something that can be very beneficial as they return to their day-to-day work. + +Companies that choose to participate by providing instruction at the Congress get an opportunity to talk with developers first hand, and receive feedback on their product offerings. Conversely, the developers have an opportunity to provide feedback on products and influence what platforms (everything OpenPOWER) are going to look like as they mature. + +### **What is Xilinx Bringing to the Developer Congress?** + +Xilinx will be bringing system architects and solution architects who will work hands-on with developers to create solutions and solve problems. These experts understand both FPGAs and machine learning algorithms, which fits nicely with the OpenPOWER Developer Congress agenda. 
diff --git a/content/blog/avnet-joins-openpower-foundation.md b/content/blog/avnet-joins-openpower-foundation.md new file mode 100644 index 0000000..c9e1a23 --- /dev/null +++ b/content/blog/avnet-joins-openpower-foundation.md @@ -0,0 +1,33 @@ +--- +title: "Avnet Joins OpenPOWER Foundation" +date: "2015-01-15" +categories: + - "press-releases" + - "blogs" +--- + +PHOENIX, Jan 15, 2015 (BUSINESS WIRE) -- [Avnet, Inc](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=Avnet%2C+Inc&index=1&md5=40a05c1ec12025dc0539a7a8b4ef0803). (NYSE: [AVT](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fir.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=AVT&index=2&md5=65187ddc0108742fc13369e6a37bf5d8)), a leading global technology distributor, today announced that it has joined the OpenPOWER Foundation, an open development alliance based on IBM’s POWER microprocessor architecture. Working with the OpenPOWER Foundation, Avnet will help partners and customers innovate across the full hardware and software stack to build customized server, networking and storage hardware solutions best suited to the high-performance Power architecture. + +The OpenPOWER Foundation was established in 2013 as an open technical membership organization that provides a framework for open innovation at both the hardware and software levels. IBM’s POWER8 processor serves as the hardware foundation, while the system software structure embraces key open source technologies including KVM, Linux and OpenStack. + +“Working with the OpenPOWER Foundation complements Avnet’s long-standing relationship with IBM across the enterprise, from the components level to the data center,” said Tony Madden, Avnet senior vice president, global supplier business executive. “With the accelerated pace of change in technology, membership in the OpenPOWER Foundation provides an excellent avenue for us to work alongside other market leaders to deploy open Power technology, providing customers and partners with the technology infrastructure they need to evolve and grow their businesses.” + +As an OpenPOWER Foundation member, Avnet will provide channel distribution and integration services for OpenPOWER compatible offerings, enabling its partners and customers to focus on innovation, optimizing operational efficiency and enhancing profitability. 
+ +[Click to Tweet](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fctt.ec%2FPia36&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=Click+to+Tweet&index=3&md5=e1f5619ff235ad8e0320d1e3b644bef6): .@Avnet joins #OpenPOWER Foundation [http://bit.ly/1ll33LR](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fbit.ly%2F1ll33LR&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=http%3A%2F%2Fbit.ly%2F1ll33LR&index=4&md5=13bd59e51076a54fb02f3151f471cea4) + +Follow Avnet on Twitter: [@Avnet](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Ftwitter.com%2Favnet&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=%40Avnet&index=5&md5=e43111ddc5cf4e9c106917a235854dfe) + +Connect with Avnet on LinkedIn or Facebook:[https://www.linkedin.com/company/avnet](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fwww.linkedin.com%2Fcompany%2Favnet&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=https%3A%2F%2Fwww.linkedin.com%2Fcompany%2Favnet&index=6&md5=bbd3f4d589ef461e25d34e4d47b471e3) or [facebook.com/avnetinc](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.facebook.com%2FAvnetInc&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=facebook.com%2Favnetinc&index=7&md5=5308390807e1e243f6406e2d0b1cc2fa) + +Read more about Avnet on its blogs: [http://blogging.avnet.com/weblog/mandablog/](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fblogging.avnet.com%2Fweblog%2Fmandablog%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=http%3A%2F%2Fblogging.avnet.com%2Fweblog%2Fmandablog%2F&index=8&md5=2c0e6a3e0270a00a96a936beb022ceae) + +**About Avnet, Inc.** + +Avnet, Inc. (NYSE: [AVT](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fir.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=AVT&index=9&md5=cacab694ead8f7e5a00e8889cb04f2fa)), a Fortune 500 company, is one of the largest distributors of electronic components, computer products and embedded technology serving customers globally. Avnet accelerates its partners’ success by connecting the world’s leading technology suppliers with a broad base of customers by providing cost-effective, value-added services and solutions. For the fiscal year ended June 28, 2014, Avnet generated revenue of $27.5 billion. For more information, visit[www.avnet.com](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=www.avnet.com&index=10&md5=b5f2c37f3d7d641a5aaf2ef50d090012). + +All brands and trade names are trademarks or registered trademarks, and are the properties of their respective owners. Avnet disclaims any proprietary interest in marks other than its own. + +SOURCE: Avnet, Inc. + +Avnet, Inc. Joal Redmond, +1 480-643-5528 [joal.redmond@avnet.com](mailto:joal.redmond@avnet.com) or Brodeur Partners, for Avnet, Inc. 
Marcia Chapman, +1 480-308-0284 [mchapman@brodeur.com](mailto:mchapman@brodeur.com) diff --git a/content/blog/barcelona-supercomputing-center-adds-hpc-expertise-to-openpower.md b/content/blog/barcelona-supercomputing-center-adds-hpc-expertise-to-openpower.md new file mode 100644 index 0000000..1389738 --- /dev/null +++ b/content/blog/barcelona-supercomputing-center-adds-hpc-expertise-to-openpower.md @@ -0,0 +1,22 @@ +--- +title: "Barcelona Supercomputing Center Adds HPC Expertise to OpenPOWER" +date: "2016-10-27" +categories: + - "blogs" +tags: + - "featured" +--- + +_Eduard Ayguadé, Computer Sciences Associate Director at BSC_ + +![Barcelona Supercomputing Center joins OpenPOWER](images/BSC-blue-large-1024x255.jpg) + +The [Barcelona Supercomputing Center](https://www.bsc.es/) (BSC) is Spain’s National Supercomputing facility. Our mission is to investigate, develop and manage information technologies to facilitate scientific progress. It was officially constituted in April 2005 with four scientific departments: Computer Sciences, Computer Applications in Science and Engineering, Earth Sciences and Life Sciences. In addition, the Center’s Operations department manages MareNostrum, one of the most powerful supercomputers in Europe. The activities in these departments are complementary to each other and very tightly related, setting up a multidisciplinary loop: computer architecture, programming models, runtime systems and resource managers, performance analysis tools, algorithms and applications in the above-mentioned scientific and engineering areas. + +Joining the OpenPOWER Foundation will allow BSC to advance its mission, improving the way we contribute to the scientific and technological HPC community and, in the end, serve society. BSC plans to actively participate in the different working groups in OpenPOWER with the objective of sharing our research results, prototyping implementations and know-how with the other members to influence the design of future systems based on the POWER architecture. As a member of OpenPOWER, BSC hopes to gain visibility and opportunities to collaborate with other leading institutions in high performance architectures, programming models and applications. + +In the framework of the current [IBM-BSC Deep Learning Center](https://www.bsc.es/news/bsc-news/bsc-and-ibm-research-deep-learning-center-boost-cognitive-computing) initiative, BSC and IBM will collaborate in research and development projects in the Deep Learning domain, an essential component of cognitive computing, with a focus on the development of new algorithms to improve and expand the cognitive capabilities of deep learning systems. Additionally, the center will also do research on flexible computing architectures – fundamental for big data workloads – like data-centric systems and applications. + +Researchers at BSC have been working on policies to optimally manage the hardware resources available in POWER-based systems from the runtime system, including prefetching, multithreading degree and energy-securing. These policies are driven by the information provided by the per-task (performance and power) counters available in POWER architectures and control knobs.
Researchers at BSC have also been collaborating with the compiler teams at IBM on the implementation and evolution of the [OpenMP programming model](https://www.ibm.com/developerworks/community/groups/service/html/communitystart?communityUuid=8e0d7b52-b996-424b-bb33-345205594e0d) to support accelerators; evaluating new SKV (Scalable Key-Value) storage capabilities on top of novel memory and storage technologies, including bug reporting and fixing, with Smufin, one of the key applications at BSC to support personalized medicine; and exploring NUMA-aware placement strategies in POWER architectures to deploy containers based on workload characteristics and system state. + +Today, during the [OpenPOWER Summit Europe](https://openpowerfoundation.org/openpower-summit-europe/) in Barcelona, the director of BSC, Prof. Mateo Valero, will present the mission and main activities of the Center and the different departments at the national, European and international level. After that, he will present the work that BSC is conducting with different OpenPOWER members, including IBM, NVIDIA, Samsung, and Xilinx, with a special focus on the BSC and IBM research collaboration over the last 15 years. diff --git a/content/blog/barreleye-g2-zaius-motherboard-openpower-summit.md b/content/blog/barreleye-g2-zaius-motherboard-openpower-summit.md new file mode 100644 index 0000000..8fec338 --- /dev/null +++ b/content/blog/barreleye-g2-zaius-motherboard-openpower-summit.md @@ -0,0 +1,50 @@ +--- +title: "Barreleye G2 and Zaius Motherboard Samples Showcased at the OpenPOWER Summit" +date: "2018-05-14" +categories: + - "blogs" +tags: + - "google" + - "rackspace" + - "openpower-summit" + - "barreleye" + - "zaius" + - "openpower-foundation" +--- + +By Adi Gangidi + +\[caption id="attachment\_5438" align="aligncenter" width="267"\][![Barreleye G2 Accelerator server](images/barreleye-267x300.jpg)](https://openpowerfoundation.org/wp-content/uploads/2018/05/barreleye.jpg) Barreleye G2 Accelerator server\[/caption\] + +Rackspace showcased brand-new Zaius PVT motherboard samples and Barreleye G2 servers at the [OpenPOWER Summit](https://opfus2018.sched.com/event/E36g/accelerators-development-update-zaius-barreleye-g2), demonstrating industry-leading capabilities. + +## **Collaboration between Google and Rackspace** + +The Zaius/Barreleye G2 OpenPOWER platform was originally [announced](https://blog.rackspace.com/first-look-zaius-server-platform-google-rackspace-collaboration) at the OpenPOWER Summit in 2016 as a collaborative effort between Google and Rackspace. Since then, we have made steady progress on the development of this platform. We’ve navigated through engineering validation and test (EVT), design validation and test (DVT) and made various optimizations to the design, resulting in a refined solution. + +We continue to [qualify](https://blog.rackspace.com/zaius-barreleye-g2-server-development-update-2) various OpenCAPI/NVLink 2.0 adapters and play with frameworks ([SNAP](https://github.com/open-power/snap)/[PowerAI](https://www.ibm.com/us-en/marketplace/deep-learning-platform)) that enable easy adoption of these adapters. + +## **Zaius motherboard** + +Our Zaius motherboard has just entered the production validation and test stage, which reflects our confidence in this design and our continued effort to bring OpenCAPI/NVLink 2.0/PCIe Gen4 accelerators to datacenters via this server housing IBM Power9 processors.
+ +\[caption id="attachment\_5439" align="aligncenter" width="625"\][![PVT Zaius Motherboard](images/PVT-1024x651.png)](https://openpowerfoundation.org/wp-content/uploads/2018/05/PVT.png) PVT Zaius Motherboard\[/caption\] + +## **CPU-GPU NVLink 2.0 Interposer Board** + +Also at the OpenPOWER Summit, Rackspace displayed our unique, disaggregated implementation of CPU-GPU NVLink 2.0 interposer board. This board is ideal for artificial intelligence and deep learning applications. + +Further, when combined with PCIe Gen4, we believe the Interposer Board will provide reference in the server industry for solving two bottlenecks: + +1. The slow CPU-GPU link +2. Slow server-to-server network speed + +Both bottlenecks are commonplace today in PCIe Gen3 servers. + +\[caption id="attachment\_5440" align="aligncenter" width="625"\][![SlimSAS – to – SXM2 Interposer for support Volta GPU and FPGA HBM2 Card](images/SlimSAS-1024x445.jpg)](https://openpowerfoundation.org/wp-content/uploads/2018/05/SlimSAS.jpg) SlimSAS – to – SXM2 Interposer for support Volta GPU and FPGA HBM2 Card\[/caption\] + +Conference attendees also saw first-in-industry technology demos from Rackspace, including a demo of the world’s first production-ready PCIe Gen4 NVM Express System. You can read about that [here](https://openpowerfoundation.org/blogs/openpower-pcie/). + +Rackspace expects to do limited access customer betas later this year, based on Barreleye G2 Accelerator servers. + +Customers interested in participating, please reach out by emailing [hardware-engineering@lists.rackspace.com](mailto:hardware-engineering@lists.rackspace.com). diff --git a/content/blog/big-data-and-ai-collaborative-research-and-teaching-initiatives-with-openpower.md b/content/blog/big-data-and-ai-collaborative-research-and-teaching-initiatives-with-openpower.md new file mode 100644 index 0000000..6bef8c2 --- /dev/null +++ b/content/blog/big-data-and-ai-collaborative-research-and-teaching-initiatives-with-openpower.md @@ -0,0 +1,50 @@ +--- +title: "Big Data and AI: Collaborative Research and Teaching Initiatives with OpenPOWER" +date: "2020-02-13" +categories: + - "blogs" +tags: + - "ibm" + - "power" + - "hpc" + - "big-data" + - "summit" + - "ai" + - "oak-ridge-national-laboratory" +--- + +[Arghya Kusum Das](https://www.linkedin.com/in/arghya-kusum-das-567a4761/), Ph.D., Asst. Professor, UW-Platteville + +![](images/Blog-Post_2.19.20.png) + +In the Department of Computer Science and Software Engineering (CSSE) at the University of Wisconsin at Platteville, I work closely with hardware system designers to improve the quality of the institute’s research and teaching. + +Recently, I have engaged with the OpenPOWER community to improve research efforts and also to help build collaborative education platforms. + +## **Accelerating Research on POWER** + +As a collaborative academic partner with the OpenPOWER Foundation, I have participated and led sessions at various OpenPOWER Academic workshops. These workshops gave me an opportunity to learn about various features around OpenPOWER and also provided great networking opportunities with many research organizations and customers. + +As part of this, I submitted a research proposal to [Oak Ridge National Laboratory](https://www.ornl.gov/) for allocation in the Summit supercomputing cluster to accelerate my research. With this allocation, I focus on accurate, de novo assembly and binning of metagenomic sequences, which can become quite complex with multiple genomes in mixed sequence reads. 
The computation process is also challenged by the huge volume of the datasets. + +Our assembly pipeline involves two major steps: first, a de Bruijn graph-based de novo assembly and second, binning the whole genomes into operational taxonomic units utilizing deep learning techniques. In conjunction with large data sets, these deep learning technologies and scientific methods for big data genome analysis demand more compute cycles per processor than ever before. Extreme I/O performance is also required. + +The final goal of this project is to accurately assemble terabyte-scale metagenomic data, leveraging IBM Power9 technology along with Nvidia GPUs and NVLink. + +## **Building a Collaborative Future** + +One of our collaborative visions is to spread HPC education to meet the worldwide need for experts in the corresponding fields. As a part of this vision, I recognized the importance of online education and started working on a pilot project to develop an innovative, online course curriculum for these cutting-edge domains of technology. + +To further facilitate these visions, I’m also working on developing a collaborative, online education platform where students can not only receive lectures and deepen their theoretical knowledge, but also get hands-on experience with cutting-edge infrastructure. + +I’m interested in collaborating with bright minds, including faculty, students and professionals, to materialize this online education goal. + +## **Future Workshops and Hackathons** + +As a part of this collaborative initiative, I plan to organize big data workshops and hackathons, which will provide a forum for disseminating the latest research, as well as a platform for students to get hands-on learning and engage in practical discussion about big data and AI-based technologies. + +The first of these planned events is the OpenPOWER Big Data and AI workshop taking place on April 7th, 2020. Attendees will hear about IBM and OpenPOWER partnerships, and cutting-edge research on big data, AI, and HPC, including outreach, industry research, and other initiatives. + +You can register for the workshop [**here**](https://www.uwplatt.edu/big-data-ai). + +Can’t wait to see you there! diff --git a/content/blog/blog-it-powers-new-business-models.md b/content/blog/blog-it-powers-new-business-models.md new file mode 100644 index 0000000..57babd7 --- /dev/null +++ b/content/blog/blog-it-powers-new-business-models.md @@ -0,0 +1,10 @@ +--- +title: "Blog | IT powers new business models" +date: "2014-07-02" +categories: + - "blogs" +--- + +People and businesses today are rapidly adopting new technologies and devices that are transforming the way they interact with each other and their data. + +This digital transformation generates 2.5 quintillion bytes of data associated with the proliferation of mobile devices, social media and cloud computing, and drives tremendous growth opportunity. diff --git a/content/blog/blogpost1.md b/content/blog/blogpost1.md new file mode 100644 index 0000000..025bb63 --- /dev/null +++ b/content/blog/blogpost1.md @@ -0,0 +1,16 @@ +--- +title: "Members can now request early access to Tyan reference board" +date: "2014-07-10" +categories: + - "blogs" +tags: + - "openpower" + - "power8" + - "tyan" + - "atx" + - "debian" +--- + +![Tyan reference Board](images/Tyan-reference-Board-300x180.jpg) We are excited by the progress that the OpenPOWER Foundation member companies have made since our public launch in San Francisco back in April.
Members can now request early access to the Tyan reference board shown below by emailing [Bernice Tsai](mailto:bernice.tsai@mic.com.tw) at Tyan. This is a single-socket, ATX form factor, POWER8 motherboard on which members can bring up a [Debian Linux Distribution](https://wiki.debian.org/ppc64el) (little endian) to start innovating with. We look forward to seeing the great ideas that will be generated by working together! + +Gordon diff --git a/content/blog/brocade-mobile-world-congress.md b/content/blog/brocade-mobile-world-congress.md new file mode 100644 index 0000000..0fb29e2 --- /dev/null +++ b/content/blog/brocade-mobile-world-congress.md @@ -0,0 +1,24 @@ +--- +title: "New OpenPOWER Member Brocade Showcases Work at Mobile World Congress" +date: "2016-02-19" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Brian Larsen, Director, Partner Business Development, Brocade![logo-brocade-black-red-rgb](images/logo-brocade-black-red-rgb.jpg)_ + +In my 32-year career in the IT industry there has never been a better time to embrace the partnership needed to meet client requirements, needs and expectations. Brocade has built its business on partnering with suppliers who deliver enterprise-class infrastructure in all the major markets. This collaborative mindset is what led us to the OpenPOWER Foundation, where an ecosystem of over 180 vendors, suppliers, and researchers can build new options for client solutions. + +Brocade recognizes that OpenPOWER platforms are providing choice, and with that choice comes the need to enable those platforms with the same networking capabilities that users are familiar with. If you have been in a cave for the last eight years, you may not know that Brocade has broken out of its mold of being a fibre channel switch vendor and now supports a portfolio of IP networking platforms along with innovative solutions in Software Defined Networking (SDN) and Network Function Virtualization (NFV). Our work will allow our OpenPOWER partners to design end-to-end solutions that include both storage and IP networked solutions. Use cases for specific industries can be developed for high-speed network infrastructure for M2M communication or compute-to-storage requirements. As target use cases evolve, networking functionality could transform from a physical infrastructure to a virtual architecture where the compute platform is a critical and necessary component. + +![OpenPOWER Venn Diagram](images/OpenPOWER-Venn-Diagram.jpg) + +The OpenPOWER Foundation’s [membership has exploded](https://openpowerfoundation.org/membership/current-members/) since its inception and is clearly making a mark on new data center options for users who expect peak performance to meet today’s demanding IT needs. As Brocade’s SVP and GM of Software Networking, Kelly Herrell, says, “OpenPOWER processors provide innovation that powers datacenter and cloud workloads”. Enterprise Datacenter and Service Provider (SP) markets are key areas of focus for Brocade, and by delivering on its [promise of the “New IP”](http://bit.ly/1Oiu13z), businesses will be able to transition to more automation, accelerated service delivery and new revenue opportunities. + +Brocade will be at [Mobile World Congress](https://www.mobileworldcongress.com/) in Barcelona and [IBM’s InterConnect Conference](http://ibm.co/1KsWIzQ) in Las Vegas from February 22-25th; come see us and let us show you the advantages of being an ecosystem partner with us.
+ +* * * + +_![Brian Larsen Brocade](images/Brian-Larsen-Brocade-150x150.jpg)Brian Larsen joined Brocade in July 1991 and has more than 29 years of professional experience in high-end processing, storage, disaster recovery, Cloud, virtualization and networking environments. Larsen is the Director of Partner Business Development, responsible for solution and business development within all IBM divisions. For the last 5 years, he has focused on both service provider and enterprise markets with specific focus areas in: Cloud, Virtualization, Software Defined Networking (SDN), Network Function Virtualization (NFV), Software Defined Storage (SDS) and Analytics solutions._ diff --git a/content/blog/canonical-supporting-ibm-power8-for-ubuntu-cloud-big-data.md b/content/blog/canonical-supporting-ibm-power8-for-ubuntu-cloud-big-data.md new file mode 100644 index 0000000..3dd1fb5 --- /dev/null +++ b/content/blog/canonical-supporting-ibm-power8-for-ubuntu-cloud-big-data.md @@ -0,0 +1,8 @@ +--- +title: "Canonical Supporting IBM POWER8 for Ubuntu Cloud, Big Data" +date: "2014-06-27" +categories: + - "blogs" +--- + +If Ubuntu Linux is to prove truly competitive in the OpenStack cloud and Big Data worlds, it needs to run on more than x86 hardware. And that's what Canonical achieved this month, with the announcement of full support for IBM POWER8 machines on Ubuntu Cloud and Ubuntu Server. diff --git a/content/blog/capi-and-flash-for-larger-faster-nosql-and-analytics.md b/content/blog/capi-and-flash-for-larger-faster-nosql-and-analytics.md new file mode 100644 index 0000000..648ba71 --- /dev/null +++ b/content/blog/capi-and-flash-for-larger-faster-nosql-and-analytics.md @@ -0,0 +1,81 @@ +--- +title: "Using CAPI and Flash for larger, faster NoSQL and analytics" +date: "2015-09-25" +categories: + - "blogs" +tags: + - "openpower" + - "power8" + - "featured" + - "capi" + - "big-data" + - "databases" + - "ubuntu" + - "redis-labs" + - "capi-series" +--- + +_By Brad Brech, Distinguished Engineer, IBM Power Systems Solutions_ + +## [![CAPI Flash Benefits Infographic](images/CAPI_Flash_Infographic-475x1024.jpg)](http://ibm.co/1FxOPq9)Business Challenge + +Suppose you’re a game developer with a release coming up. If things go well, your user base could go from zero to hundreds of thousands in no time. And these gamers expect your app to capture and store their data, so the game always knows who's playing and their progress in the game, no matter where they log in. You’re implementing an underlying database to serve these needs. + +Oh—and you’ve got to do that without adding costly DRAM to existing systems, and without much of a budget to build a brand-new large shared memory or distributed multi-node database solution. Don’t forget that you can’t let your performance get bogged down with IO latency from a traditionally attached flash storage array. + +More and more, companies are choosing NoSQL over traditional relational databases. NoSQL offers simple data models, scalability, and exceptionally speedy access to in-memory data. Of particular interest to companies running complex workloads is NoSQL's high availability for key value stores (KVS) like [Redis](https://redislabs.com/solutions-redis-labs-on-power) and MemcacheDB, document stores such as mongoDB and couchDB, and column stores Cassandra and BigTable. + +## Computing Challenge + +NoSQL isn't headache-free. + +Running NoSQL workloads fast enough to get actionable insights from them is expensive and complex.
That requires your business either to invest heavily in a shared-memory system or to set up a multi-node networked solution that adds complexity and latency when accessing your valuable data. + +Back to our game developer and their demanding gamers. As the world moves to the cloud, developers need to offer users rapid access to online content, often tagged with metadata. Metadata needs low response times as it is constantly being accessed by users. NoSQL provides flexibility for content-driven applications to not only provide fast access to data but also store diverse data sets. That makes our game developer an excellent candidate for using CAPI-attached Flash to power a NoSQL database. + +## The Solution + +Here's where CAPI comes in. Because CAPI allows you to attach devices with memory coherency at incredibly low latency, you can use CAPI to affix flash storage that functions more like extended block system memory for larger, faster NoSQL. Coming together, OpenPOWER Foundation technology innovators including [Redis Labs](https://redislabs.com/solutions-redis-labs-on-power), [Canonical](https://insights.ubuntu.com/2014/10/10/ubuntu-with-redis-labs-altera-and-ibm-power-supply-new-nosql-data-store-solution/), and [IBM](http://ibm.co/1FxOPq9) created this brilliant new deployment model, and they built [Data Engine for NoSQL](http://ibm.co/1FxOPq9)—one of the first commercially available CAPI solutions. + +CAPI-attached flash enables great things. By CAPI-attaching a 56 TB flash storage array to the POWER8 CPU via an FPGA, the application gets direct access to a large flash array with reduced I/O latency and overhead compared to standard I/O-attached flash. End-users can: + +- _Create a fast path to a vast store of memory_ +- _Reduce latency by cutting the number of code instructions to retrieve data from 20,000 to as low as 2000, by eliminating I/O overhead[1](#_ftn1)_ +- _Increase performance by increasing bandwidth by up to 5X on a per-thread basis[1](#_ftn1)_ +- _Lower deployment costs by 3X through massive infrastructure consolidation[2](#_ftn2)_ +- _Cut TCO with infrastructure consolidation by shrinking the number of nodes needed from 24 to 1[2](#_ftn2)_ + + + +## Get Started with Data Engine for NoSQL + +Getting started is easy, and our goal is to provide you with the resources you need to begin. This living list will continue to evolve as we provide you with more guidance, information, and use cases, so keep coming back to be sure you can stay up to date. + +### Learn more about the Data Engine for NoSQL: + +- [Data Engine for NoSQL Solution Brief](http://ibm.co/1KTPS44) +- [Data Engine for NoSQL Whitepaper](http://ibm.co/1izYfXN) + +### Deploy Data Engine for NoSQL: + +- [Contact IBM about Data Engine for NoSQL](http://ibm.co/1FxOPq9) to build the Data Engine for NoSQL configuration for you +- [Get community support](http://ibm.co/1VeInq6) for your solutions and share results with your peers on the [CAPI Developer Community](http://ibm.co/1VeInq6) +- Reach out to the OpenPOWER Foundation community on [Twitter](https://twitter.com/intent/tweet?screen_name=OpenPOWERorg&text=CAPI-Flash%20enables%20me%20to), [Facebook](https://www.facebook.com/openpower), and [LinkedIn](https://www.linkedin.com/grp/home?gid=7460635) along the way + +Keep coming to see blog posts from IBM and other OpenPOWER Foundation partners on how you can use CAPI to accelerate computing, networking and storage. 
+ +- [CAPI Series 1: Accelerating Business Applications in the Data-Driven Enterprise with CAPI](https://openpowerfoundation.org/blogs/capi-drives-business-performance/) +- [CAPI Series 3: Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI](https://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/) +- [CAPI Series 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/) + +  + +* * * + +**_![BradBrech](images/BradBrech.jpg)About Brad Brech_** + +_Brad Brech is a Distinguished Engineer and the CTO of POWER Solutions in the IBM Systems Division. He is currently focused on POWER and OpenPOWER and solutions and is the Chief Engineer for the CAPI attached Flash solution enabler. His responsibilities include technical strategy, solution identification, and working delivery strategies with solutions teams. Brad is an IBM Distinguished Engineer, a member of the IBM Academy of Technology and past Board member of The Green Grid._ + +[\[1\]](#_ftnref1)Based on performance analysis comparing typical I/O Model flow (PCIe) to CAPI Attached Coherent Model flow. + +[\[2\]](#_ftnref2) Based on competitive system configuration cost comparisons by IBM and Redis Labs. diff --git a/content/blog/capi-drives-business-performance.md b/content/blog/capi-drives-business-performance.md new file mode 100644 index 0000000..ee4e457 --- /dev/null +++ b/content/blog/capi-drives-business-performance.md @@ -0,0 +1,75 @@ +--- +title: "Accelerating Business Applications in the Data-Driven Enterprise with CAPI" +date: "2015-09-10" +categories: + - "blogs" +tags: + - "openpower" + - "power" + - "featured" + - "capi" + - "acceleration" + - "fpga" + - "performance" + - "capi-series" +--- + +_By Sumit Gupta, VP, HPC & OpenPOWER Operations at IBM_ _This blog is part of a series:_ _[Pt 2: Using CAPI and Flash for larger, faster NoSQL and analytics](https://openpowerfoundation.org/blogs/capi-and-flash-for-larger-faster-nosql-and-analytics/)_ _[Pt 3: Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI](https://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/)_ _[Pt 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/)_ + +Every 48 hours, the world generates as much data as it did from the beginning of recorded history through 2003. + +The monumental increase in the flow of data represents an untapped source of insight for data-driven enterprises, and drives increasing pressure on computing systems to endure and analyze it. But today, just raising processor speeds isn't enough. The data-driven economy demands a computing model that delivers equally data-driven insights and breakthroughs at the speed the market demands. + +[![CAPI Logo](images/CAPITechnology_Color_Gradient_Stacked_-300x182.png)](http://ibm.co/1MVbP5d)OpenPOWER architecture includes a technology called Coherent Accelerator Processor Interface (CAPI) that enables systems to crunch through the high volume of data by bringing compute and data closer together. CAPI is an interface that enables close integration of devices with the POWER CPU and gives coherent access to system memory. 
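To make this concrete, the user-space flow for driving a CAPI-attached accelerator (an Accelerator Function Unit, or AFU) can look like the minimal sketch below. It assumes the open-source libcxl user-space library used with CAPI on Linux; the device path, MMIO register offsets, and work element descriptor (WED) layout are illustrative only, since every AFU defines its own.

```c
#include <stdint.h>
#include <stdio.h>
#include <libcxl.h>   /* user-space CAPI library */

int main(void)
{
    /* Open the accelerator (AFU) in dedicated mode; the path is an example. */
    struct cxl_afu_h *afu = cxl_afu_open_dev("/dev/cxl/afu0.0d");
    if (!afu) {
        perror("cxl_afu_open_dev");
        return 1;
    }

    /* The work element descriptor (WED) is whatever the AFU expects;
     * here it is just a pointer to a buffer in coherent system memory. */
    static uint64_t buffer[512] __attribute__((aligned(128)));
    if (cxl_afu_attach(afu, (uint64_t)(uintptr_t)buffer)) {
        perror("cxl_afu_attach");
        return 1;
    }

    /* Map the AFU's MMIO registers, start the job, and poll for completion.
     * The 0x0 (control) and 0x8 (status) offsets are hypothetical. */
    cxl_mmio_map(afu, CXL_MMIO_BIG_ENDIAN);
    cxl_mmio_write64(afu, 0x0, 1);
    uint64_t done = 0;
    while (!done)
        cxl_mmio_read64(afu, 0x8, &done);

    cxl_mmio_unmap(afu);
    cxl_afu_free(afu);
    return 0;
}
```

Because the WED handed to the AFU is simply a pointer into the application's own coherent memory, the accelerator can walk application data in place rather than through explicit I/O copies, which is the property the rest of this post builds on.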
CAPI allows system architects to deploy acceleration in novel ways for an application and allows them to rethink traditional system designs. + +\[caption id="attachment\_1982" align="aligncenter" width="625"\][![CAPI-attached vs. traditional acceleration](images/IBMNR_OPF_CAPI_BlogPost1_Image-02-1024x531.jpg)](http://ibm.co/1MVbP5d) CAPI allows attached accelerators to deeply integrate with POWER CPUs\[/caption\] + +CAPI-attached acceleration has three pillars: accelerated computing, accelerated storage, and accelerated networking. Connected coherently to a POWER CPU to give them direct access to the CPU’s system memory, these techniques leverage accelerators like FPGAs and GPUs, storage devices like flash, and networking devices like InfiniBand. These devices, connected via CAPI, are programmable using simple library calls that enable developers to modify their applications to more easily take advantage of accelerators, storage, and networking devices. The CAPI interface is available to members of the OpenPOWER Foundation and other interested developers, and enables a rich ecosystem of data center technology providers to integrate tightly with POWER CPUs to accelerate applications. + +## **What can CAPI do?** + +CAPI has had an immediate effect in all kinds of industries and for all kinds of clients: + +- **[Healthcare](http://bit.ly/1WiV6KD):** Create customized cancer treatment plans personalized to an individual’s unique genetic make-up. +- **Image and video processing:** Facial expression recognition that allows retailers to analyze the facial reactions their shoppers have to their products. +- [**Database acceleration and fast storage**](http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=PM&subtype=SP&htmlfid=POS03135USEN&appname=TAB_2_2_Appname#loaded)**:** Accelerate the performance of flash storage to allow users to search databases in near real-time for a fraction of the cost. +- **[Risk Analysis in Finance](http://bit.ly/1N7UQMY):** Allow financial firms to monitor their risk in real-time with greater accuracy. + +## **The CAPI Advantage** + +CAPI can be used to: + +- **Accelerate Compute** by leveraging a CAPI-attached FPGA to run, for example, Monte Carlo analysis or perform Vision Processing. The access to the IBM POWER CPU’s memory address space is a programmer's dream. +- **Accelerate Storage** by using CAPI to attach flash that can be written to as a massive memory space instead of storage---a process that slashes latency compared to traditional storage IO. +- **Accelerate Networking** by deploying CAPI-attached network accelerators for faster, lower latency edge-of-network analytics. + +The intelligent and close integration enabled by CAPI with IBM POWER CPUs removes much of the latency associated with the I/O bus on other platforms (PCI-E). It also makes the accelerator a peer to the POWER CPU cores, allowing applications to access the accelerator natively. Consequently, a very small investment can help your system perform better than ever. + +https://www.youtube.com/watch?v=h1SE48\_aHRo + +## **Supported by the OpenPOWER Foundation Community** + +We often see breakthroughs when businesses open their products to developers, inviting them to innovate. To this end IBM helped create the OpenPOWER Foundation, now with 150 members, dedicated to innovating around the POWER CPU architecture. + +IBM and many of our Foundation partners are committed to developing unique, differentiated solutions leveraging CAPI.
Many more general and industry-specific solutions are on the horizon. By bringing together brilliant minds from our community of innovators, the possibilities for customers utilizing CAPI technology are endless. + +## **Get Started with CAPI** + +Getting started with CAPI is easy, and our goal is to provide you with the resources you need to begin. This living list will continue to evolve as we provide you with more guidance, information, and use cases, so keep coming back to be sure you can stay up to date. + +1. Learn more about CAPI: + - [Coherent Accelerator Processor Interface (CAPI) for POWER8 Systems](http://ibm.co/1MVbP5d) +2. Get the developer kits: + - [Alpha Data CAPI Developer Kit](http://bit.ly/1F1hzqW) + - [Nallatech CAPI Developer Kit](http://bit.ly/1OieWTK) +3. Get support for your solutions and share results with your peers on the [CAPI Developer Community](http://ibm.co/1XSQtZC) + +Along the way reach out to us on [Twitter](https://twitter.com/OpenPOWERorg), [Facebook](https://www.facebook.com/openpower?fref=ts), and [LinkedIn](https://www.linkedin.com/grp/home?gid=7460635). + +_This blog is part of a series:_ _[Pt 2: Using CAPI and Flash for larger, faster NoSQL and analytics](https://openpowerfoundation.org/blogs/capi-and-flash-for-larger-faster-nosql-and-analytics/)_ _[Pt 3: Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI](https://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/)_ _[Pt 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/)_ + +* * * + +**_[![Sumit Gupta](images/sumit-headshot.png)](https://openpowerfoundation.org/wp-content/uploads/2015/09/sumit-headshot.png)About Sumit Gupta_** + +_Sumit Gupta is Vice President, High Performance Computing (HPC) Business Line Executive and OpenPOWER Operations. With more than 20 years of experience, Sumit is a recognized industry expert in the fields of HPC and enterprise data center computing. He is responsible for business management of IBM's HPC business and for operations of IBM's OpenPOWER initiative._ diff --git a/content/blog/capi-snap-simple-developers.md b/content/blog/capi-snap-simple-developers.md new file mode 100644 index 0000000..4e92231 --- /dev/null +++ b/content/blog/capi-snap-simple-developers.md @@ -0,0 +1,74 @@ +--- +title: "CAPI SNAP: The Simplest Way for Developers to Adopt CAPI" +date: "2016-11-03" +categories: + - "capi-series" + - "blogs" +tags: + - "featured" +--- + +_By Bruce Wile, CAPI Chief Engineer and Distinguished Engineer, IBM Power Systems_ + +Last week at OpenPOWER Summit Europe, [we announced a brand-new Framework](https://openpowerfoundation.org/blogs/openpower-makes-fpga-acceleration-snap/) designed to make it easy for developers to begin using CAPI to accelerate their applications. The CAPI Storage, Network, and Analytics Programming Framework, or CAPI SNAP, was developed through a multi-company effort from OpenPOWER members and is now in alpha testing with multiple early adopter partners. + +But what exactly puts the “snap” in CAPI SNAP? To answer that, I wanted to give you all a deeper look into the magic behind CAPI SNAP.  The framework extends the CAPI technology through the simplification of both the API (call to the accelerated function) and the coding of the accelerated function.  
By using CAPI SNAP, your application gains performance both from FPGA acceleration and from moving compute resources closer to the vast amounts of data. + +## A Simple API + +ISVs will be particularly interested in the programming enablement in the framework. The framework API makes it a snap for an application to call for an accelerated function. The innovative FPGA framework logic implements all the computer engineering interface logic, data movement, caching, and pre-fetching work—leaving the programmer to focus only on the accelerator functionality. + +Without the framework, an application writer must create a runtime acceleration library to perform the tasks shown in Figure 1. + +\[caption id="attachment\_4299" align="aligncenter" width="762"\]![Figure 1: Calling an accelerator using the base CAPI hardware primitives](images/CAPI-SNAP-1.png) Figure 1: Calling an accelerator using the base CAPI hardware primitives\[/caption\] + +But now with CAPI SNAP, an application merely needs to make a function call as shown in Figure 2. This simple API specifies the source data (address/location), the specific accelerated action to be performed, and the destination (address/location) to send the resulting data. + +\[caption id="attachment\_4300" align="aligncenter" width="485"\]![Figure 2: Accelerated function call with CAPI SNAP](images/CAPI-SNAP-2.png) Figure 2: Accelerated function call with CAPI SNAP\[/caption\] + +The framework takes care of moving the data to the accelerator and putting away the results. + +## Moving the Compute Closer to the Data + +The simplicity of the API parameters is elegant and powerful. Not only can source and destination addresses be coherent system memory locations, but they can also be attached storage, network, or memory addresses. For example, if a framework card has attached storage, the application could source a large block (or many blocks) of data from storage, perform an action such as a search, intersection, or merge function on the data in the FPGA, and send the search results to a specified destination address in main system memory. This method has large performance advantages compared to the standard software method, as shown in Figure 3. + +\[caption id="attachment\_4301" align="aligncenter" width="625"\]![Figure 3: Application search function in software (no acceleration framework)](images/CAPI-SNAP-3-1024x538.png) Figure 3: Application search function in software (no acceleration framework)\[/caption\] + +Figure 4 shows how the source data flows into the accelerator card via the QSFP+ port, where the FPGA performs the search function. The framework then forwards the search results to system memory. + +\[caption id="attachment\_4302" align="aligncenter" width="625"\]![Figure 4: Application with accelerated framework search engine](images/CAPI-SNAP-4-1024x563.png) Figure 4: Application with accelerated framework search engine\[/caption\] + +The performance advantages of the framework are twofold: + +1. By moving the compute (in this case, search) closer to the data, the FPGA has higher bandwidth access to storage. +2. The accelerated search on the FPGA is faster than the software search. + +Table 1 shows a 3x performance improvement between the two methods. By moving the compute closer to the data, the FPGA has a much higher ingress (or egress) rate versus moving the entire data set into system memory. + +\[table id=19 /\]
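Putting the pieces of this search example together, the host side reduces to a single call built around the source / action / destination triple of Figure 2. The sketch below is illustrative only; the type and function names are hypothetical placeholders, not the published CAPI SNAP interfaces.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical framework types and entry point, for illustration only. */
enum addr_type { ADDR_HOST_MEM, ADDR_CARD_STORAGE };

struct fw_addr {
    enum addr_type type;   /* where the data lives                     */
    uint64_t       addr;   /* host buffer address or storage block LBA */
    size_t         size;   /* number of bytes to stream                */
};

/* One call: stream data from 'src', run the named action on the FPGA,
 * and write the results to 'dst'. */
extern int fw_run_action(const char *action,
                         const void *args, size_t args_len,
                         const struct fw_addr *src,
                         const struct fw_addr *dst);

int search_on_card(const char *pattern, size_t pattern_len,
                   uint64_t start_lba, size_t bytes_to_scan,
                   uint64_t *positions, size_t max_positions)
{
    struct fw_addr src = { ADDR_CARD_STORAGE, start_lba, bytes_to_scan };
    struct fw_addr dst = { ADDR_HOST_MEM, (uint64_t)(uintptr_t)positions,
                           max_positions * sizeof(uint64_t) };

    return fw_run_action("search", pattern, pattern_len, &src, &dst);
}
```

The application never orchestrates DMA, caching, or storage I/O itself; it only names where the data is, which action to run, and where the results should go, and the framework logic on the card does the rest.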
## Simplified Programming of Acceleration Actions + +The programming API isn’t the only simplification in CAPI SNAP. The framework also makes it easy to program the "action code" on the FPGA. The framework takes care of retrieving the source data (whether it’s in system memory, storage, networking, etc.) as well as sending the results to the specified destination. The programmer, writing in a high-level language such as C/C++ or Go, needs only to focus on their data transform, or "action." Framework-compatible compilers translate the high-level language to Verilog, which in turn gets synthesized using Xilinx’s Vivado toolset. + +With CAPI SNAP, the accelerated search code (searching for one occurrence) is this simple:

```c
for (i = 0; i < Search.text_size; i++) {
    if (buffer[i] == Search.text_string) {
        /* record the position of the match */
        Search.text_found_position = i;
    }
}
```

The open source release will include multiple, fully functional example accelerators to provide users with the starting points and the full port declarations needed to receive source data and return destination data. + +## Make CAPI a SNAP + +Are you looking to explore CAPI SNAP for your organization’s own data analysis? Then apply to be an early adopter of CAPI SNAP by emailing us directly at [capi@us.ibm.com](mailto:capi@us.ibm.com). Be sure to include your name, organization, and the type of accelerated workloads you’d like to explore with CAPI SNAP. + +You can also read more about CAPI and its capabilities in the accelerated enterprise in our [CAPI series on the OpenPOWER Foundation blog](https://openpowerfoundation.org/blogs/capi-drives-business-performance/). + +You will continue to see a drumbeat of activity around the framework, as we release the source code and add more and more capabilities in 2017. diff --git a/content/blog/cdac-hpc-education.md b/content/blog/cdac-hpc-education.md new file mode 100644 index 0000000..9477e3d --- /dev/null +++ b/content/blog/cdac-hpc-education.md @@ -0,0 +1,36 @@ +--- +title: "India’s Centre for Development of Advanced Computing Joins OpenPOWER to Spread HPC Education" +date: "2016-03-07" +categories: + - "blogs" +--- + +_By Dr. VCV Rao and Mr. Sanjay Wandheker_ + +[![CDAC Logo](images/cdac.preview-300x228.png)](https://openpowerfoundation.org/wp-content/uploads/2016/03/cdac.preview.png) + +An open ecosystem relies on collaboration to thrive, and at the Centre for Development of Advanced Computing (C-DAC), we fully embrace that belief. + +C-DAC is a pioneer in several advanced areas of IT and electronics, and has always been a proactive supporter of technology innovation. It is currently engaged in several national ICT (Information and Communication Technology) projects of critical value to India and beyond, and C-DAC’s thrust on technology innovation has led to the creation of an ecosystem in which multiple technologies coexist today on a single platform. + +C-DAC is also focused on technology education and training, and offers several degree programs including our HPC-focused _C-DAC Certified HPC Professional Certification Programme (CCHPCP)_.
We also provide advanced computing diploma programs through the Advanced Computing Training Schools (ACTS) located all over India. + +One of C-DAC’s critical projects is the “National Supercomputing Mission (NSM): Building Capacity and Capability”, the goal of which is to create a shared environment of advancements in information technology and computing that impact the way people lead their lives. + +## Partnering with OpenPOWER + +[![CDACStudents](images/maxresdefault-1024x768.jpg)](https://openpowerfoundation.org/wp-content/uploads/2016/03/maxresdefault.jpg) + +The OpenPOWER Foundation makes for an excellent partner in this effort, and through our collaboration, we hope to further strengthen supercomputing access and education by leveraging the OpenPOWER Foundation’s growing ecosystem and technology. And with OpenPOWER, we will develop and refine HPC coursework and study materials to skill the next generation of HPC programmers on OpenPOWER platforms with GPU accelerators. + +In addition, C-DAC is eager to explore the potential of OpenPOWER hardware and software in addressing some of our toughest challenges. OpenPOWER offers specific technology features for HPC research, which include IBM XLF Compilers, ESSL libraries, hierarchical memory features with good memory bandwidth per socket, IO bandwidth, the CAPI interface with its performance gain over PCIe, and the potential of POWER8/9 with NVIDIA GPUs. These OpenPOWER innovations will provide an opportunity to understand performance gains for a variety of applications in HPC and Big Data. + +## Come Join Us + +We’re very eager to move forward, focusing on exposure to new HPC tools on OpenPOWER-driven systems. C-DAC plans to be an active member of the OpenPOWER community by making open source implementations of HPC software for science and engineering applications available on OpenPOWER systems with GPU acceleration. + +To learn more about C-DAC and to get involved in our work with OpenPOWER, visit us online at [www.cdac.in](http://www.cdac.in). If you would like to learn more about our educational offerings and coursework, go to [http://bit.ly/1Sgp4ix](http://bit.ly/1Sgp4ix). diff --git a/content/blog/center-of-accelerated-application-readiness-preparing-applications-for-summit.md b/content/blog/center-of-accelerated-application-readiness-preparing-applications-for-summit.md new file mode 100644 index 0000000..2809049 --- /dev/null +++ b/content/blog/center-of-accelerated-application-readiness-preparing-applications-for-summit.md @@ -0,0 +1,22 @@ +--- +title: "Center of Accelerated Application Readiness: Preparing applications for Summit" +date: "2015-03-18" +categories: + - "blogs" +--- + +### Abstract + +The hybrid CPU-GPU architecture is one of the main tracks for dealing with the power limitations imposed on high performance computing systems. It is expected that large leadership computing facilities will, for the foreseeable future, deploy systems with this design to address science and engineering challenges for government, academia, and industry. Consistent with this trend, the U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to the Oak Ridge National Laboratory (ORNL) in 2017. This new supercomputer, named Summit, will provide at least five times the performance of Titan, the OLCF's current hybrid CPU+GPU leadership system, on science applications, and become the next peak in leadership-class computing systems for open science.
In response to a call for proposals, the OLCF has selected and will be partnering with science and engineering application development teams for the porting and optimization of their applications and carrying out a science campaign at scale on Summit. + +### Speaker Organization + +National Center for Computational Sciences Oak Ridge National Laboratory Oak Ridge, TN, USA + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/04/20150318-GTC.pdf) + +Back to Summit Details diff --git a/content/blog/chelsio-joins-openpower-foundation.md b/content/blog/chelsio-joins-openpower-foundation.md new file mode 100644 index 0000000..ad54929 --- /dev/null +++ b/content/blog/chelsio-joins-openpower-foundation.md @@ -0,0 +1,29 @@ +--- +title: "Chelsio Joins OpenPOWER Foundation" +date: "2014-11-06" +categories: + - "press-releases" + - "blogs" +--- + +SUNNYVALE, Calif., Nov. 6, 2014 ­/PRNewswire/ -- Chelsio Communications, the leading provider of 40Gb Ethernet (40GbE) Unified Wire Adapters and ASICs, announced today that it has joined the OpenPOWER Foundation, expanding the open technical community for collaboration on the POWER architecture. + +"We are proud to join the growing members of the OpenPOWER Foundation, in the open development of the POWER systems architecture. Chelsio has long been at the forefront of advanced networking ASIC design, from its first POWER systems design wins through to today's leading Terminator 5 (T5) Unified Wire adapters," said Kianoosh Naghshineh, CEO, Chelsio Communications. + +With its proven Terminator ASIC technology designed in more than 100 OEM platforms and the successful deployment of more than 750,000 ports, Chelsio has enabled unified wire solutions for LAN, SAN and cluster traffic. With its unique ability to fully offload TCP, iSCSI, FCoE and iWARP RDMA protocols on a single chip, Chelsio adapter cards remove the burden of communications responsibilities and processing overhead from servers and storage systems, resulting in a dramatic increase in application performance. The added advantage of traffic management and quality of service (QoS) running at 40Gbps line rate ensures today's Big Data, Cloud, and enterprise data centers run efficiently at high performance. + +"The OpenPOWER Foundation is changing the game, driving innovation and ultimately offering more choices in the industry," said Brad McCredie, President, OpenPOWER Foundation. "We look forward to the participation of Chelsio Communications and their contributions toward creating innovative and winning solutions based on POWER architecture." + +Chelsio T5 Unified Wire Adapters Chelsio Unified Wire Adapters, based upon the fifth generation of its high performance Terminator (T5) ASIC, are designed for data, storage and high performance clustering applications. + +Read more about the [Chelsio T5 Unified Wire Adapters.](http://www.chelsio.com/nic/t5-unified-wire-adapters/) + +About Chelsio Communications, Inc. Chelsio Communications is leading the convergence of networking, storage and clustering interconnects and I/O virtualization with its robust, high-performance and proven Unified Wire technology. Featuring a highly scalable and programmable architecture, Chelsio is shipping multi-port 10 Gigabit Ethernet (10GbE) and 40GbE adapter cards, delivering the low latency and superior throughput required for high-performance compute and storage applications. For more information, visit the company online at www.chelsio.com. 
+ +All product and company names herein are trademarks of their registered owners. + +Logo - [http://photos.prnewswire.com/prnh/20130611/SF30203LOGO](http://photos.prnewswire.com/prnh/20130611/SF30203LOGO) + +SOURCE Chelsio Communications + +RELATED LINKS [http://www.chelsio.com](http://www.chelsio.com) diff --git a/content/blog/china-power-technology-alliance-cpta.md b/content/blog/china-power-technology-alliance-cpta.md new file mode 100644 index 0000000..19e2d1f --- /dev/null +++ b/content/blog/china-power-technology-alliance-cpta.md @@ -0,0 +1,20 @@ +--- +title: "China POWER Technology Alliance (CPTA)" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Objective + +The objective is to position China POWER Technology Alliance (CPTA) as a mechanism to help global OpenPOWER Foundation members engage with China organizations on POWER-based implementations in China. + +### Abstract + +OpenPOWER ecosystem has grown fast in China Market with 12 OPF members growth in 2014. China POWER Technology Alliance was established in Oct. 2014, led by China Ministry of Industry and Information Technology (MIIT), in order to accelerate the speed of China secured and trusted IT industry chain building, by leveraging OpenPOWER Technology. This presentation is for the purpose of linking up CPTA and OPF global members, to help global OPF member to use CPTA as a stepping stone to go into China market. This presentation will focus on explaining to the global OPF members WHY they should come to China, and above all, HOW to come to China, and WHAT support services CPTA will provide to the global OPF members. It’ll also create a clarity between CPTA and OPF in China, for OPF members to leverage CPTA as a (non-mandatory) on-ramp to China. + +### Speaker + +Zhu Ya Dong (to be confirmed), Chairman of PowerCore, China, Platinum Member of OpenPOWER Foundation + +[Back to Summit Details](2015-summit/) diff --git a/content/blog/cirrascale-joins-openpower-foundation-announces-gpu-accelerated-power8-based-multi-device-development-platform.md b/content/blog/cirrascale-joins-openpower-foundation-announces-gpu-accelerated-power8-based-multi-device-development-platform.md new file mode 100644 index 0000000..bf57be8 --- /dev/null +++ b/content/blog/cirrascale-joins-openpower-foundation-announces-gpu-accelerated-power8-based-multi-device-development-platform.md @@ -0,0 +1,29 @@ +--- +title: "Cirrascale® Joins OpenPOWER™ Foundation, Announces GPU-Accelerated POWER8®-Based Multi-Device Development Platform" +date: "2015-03-20" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +### The Cirrascale RM4950 4U POWER8-based development platform, with Cirrascale SR3514 PCIe switch riser, enables up to four NVIDIA Tesla GPU Accelerators or other compatible PCIe Gen 3.0 devices. + +![gI_90416_RM4950_SideView_PR](images/gI_90416_RM4950_SideView_PR.png)Cirrascale Corporation®, a premier developer of build-to-order, open architecture blade-based and rackmount computing infrastructure, today announced its membership within the OpenPOWER™ Foundation and the release of its RM4950 development platform, based on the IBM® POWER8® 4-core Turismo SCM processor, and designed with NVIDIA® Tesla® GPU accelerators in mind. The new POWER8-based system provides a solution perfectly aligned to support GPU-accelerated big data analytics, deep learning, and scientific high-performance computing (HPC) applications. 
+ +“As Cirrascale dives deeper into supporting more robust installations of GPU-accelerated applications, like those used in big data analytics and deep learning, we’re finding customers rapidly adopting disruptive technologies to advance their high-end server installations,” said David Driggers, CEO, Cirrascale Corporation. “The RM4950 POWER8-based server provides a development platform unique to the marketplace that has the ability to support multiple PCIe devices on a single root complex while enabling true scalable performance of GPU-accelerated applications.” + +The secret sauce of the RM4950 development platform lies with the company’s 80-lane Gen3 PCIe switch-enabled riser, the Cirrascale SR3514. It has been integrated into several recent product releases to create an extended PCIe fabric supporting up to four NVIDIA Tesla GPU accelerators, or other compatible PCIe devices, on a single PCIe root complex. + +“Cirrascale’s new servers enable enterprise and HPC customers to take advantage of GPU acceleration with POWER CPUs,” said Sumit Gupta, general manager of Accelerated Computing at NVIDIA. “The servers support multiple GPUs, which dramatically enhances performance for a range of applications, including data analytics, deep learning and scientific computing.” + +The system is the first of its type for Cirrascale as a new member of the OpenPOWER Foundation. The company joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technologies as well as industry leading open source software aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers. The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform. + +“The Cirrascale RM4950 4U POWER8-based development platform is a great example of how new advancements are made possible through open collaboration,” said Ken King, General Manager of OpenPOWER Alliances. “Our OpenPOWER Foundation members are coming together to create meaningful disruptive technologies, providing the marketplace with unique solutions to manage today’s big data needs.” + +The Cirrascale RM4950 development platform is the first of the company’s POWER8-based reference systems, with plans for production environment systems being announced later this year. The current development platform is immediately available to order and will be shipping in volume in Q2 2015. Licensing opportunities will also be available immediately to both customers and partners. + +**About Cirrascale Corporation** Cirrascale Corporation is a premier provider of custom rackmount and blade server solutions developed and engineered for today’s conventional data centers. Cirrascale leverages its patented Vertical Cooling Technology, engineering resources, and intellectual property to provide the industry's most energy-efficient standards-based platforms with the lowest possible total cost of ownership in the densest form factor. Cirrascale sells to large-scale infrastructure operators, hosting and managed services providers, cloud service providers, government, higher education, and HPC users. Cirrascale also licenses its award winning technology to partners globally. 
To learn more about Cirrascale and its unique data center infrastructure solutions, please visit [http://www.cirrascale.com](http://www.prweb.net/Redirect.aspx?id=aHR0cDovL3d3dy5jaXJyYXNjYWxlLmNvbQ==) or call (888) 942-3800. + +Cirrascale and the Cirrascale logo are trademarks or registered trademarks of Cirrascale Corporation. NVIDIA and Tesla are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. IBM, POWER8, and OpenPOWER are trademarks or registered trademarks of International Business Machines Corporation in the U.S. and other countries. All other names or marks are property of their respective owners. diff --git a/content/blog/clarkson-university-joins-openpower-foundation.md b/content/blog/clarkson-university-joins-openpower-foundation.md new file mode 100644 index 0000000..9283a01 --- /dev/null +++ b/content/blog/clarkson-university-joins-openpower-foundation.md @@ -0,0 +1,37 @@ +--- +title: "Clarkson University Joins OpenPOWER Foundation" +date: "2017-03-21" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +# Clarkson University Joins OpenPOWER Foundation + +Clarkson University has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture. + +![IBM POWER8 Processor](images/openpower-300.jpg)POWER CPU denotes a series of high-performance microprocessors designed by IBM. + +Clarkson joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology as well as industry leading open source software aimed at delivering more choice, control and flexibility to developers of next-generation, hyper-scale and cloud data centers. + +The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform. + +With the POWER hardware and software, the researchers at Clarkson, especially the faculty in the Wallace H. Coulter School of Engineering's Department of Electrical & Computer Engineering, will be able to elevate their research in multicore/multithreading architectures, the interaction between system software and micro-architecture, and hardware acceleration techniques based on the POWER microprocessor architecture. The Clarkson faculty intend to join the OpenPOWER Foundation's hardware architecture, system software, and hardware accelerator workgroups. + +"As a member of the OpenPOWER Foundation, we will be able to explore the state-of-the-art hardware and software design used in supercomputer and cloud computing platforms, as well as collaborating with researchers from industry and other institutions," said Assistant Professor of Electrical & Computer Engineering Chen Liu, who is leading the Computer Architecture and Microprocessor Engineering Laboratory at Clarkson. + +"The development model of the OpenPOWER Foundation is one that elicits collaboration and represents a new way in exploiting and innovating around processor technology," says OpenPOWER Foundation President Bryan Talik. "With the Power architecture designed for Big Data and Cloud, new OpenPOWER Foundation members like Clarkson University will be able to add their own innovations on top of the technology to create new applications that capitalize on emerging workloads." 
+ +To learn more about OpenPOWER and view the complete list of current members, visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org/). + +Clarkson University educates the leaders of the global economy. One in five alumni already leads as an owner, CEO, VP or equivalent senior executive of a company. With its main campus located in Potsdam, N.Y., and additional graduate program and research facilities in the Capital Region and Beacon, New York, Clarkson is a nationally recognized research university with signature areas of academic excellence and research directed toward the world's pressing issues. Through more than 50 rigorous programs of study in engineering, business, arts, education, sciences and the health professions, the entire learning-living community spans boundaries across disciplines, nations and cultures to build powers of observation, challenge the status quo, and connect discovery and innovation with enterprise. + +**Photo caption: IBM POWER8 Processor.** + +**\[A photograph for media use is available at [http://www.clarkson.edu/news/photos/openpower.jpg](http://clarkson.edu/news/photos/openpower.jpg).\]** + +\[News directors and editors: For more information, contact Michael P. Griffin, director of News & Digital Content Services, at 315-268-6716 or [mgriffin@clarkson.edu](mailto:mgriffin@clarkson.edu).\] diff --git a/content/blog/cloud-openpower-nec.md b/content/blog/cloud-openpower-nec.md new file mode 100644 index 0000000..502b841 --- /dev/null +++ b/content/blog/cloud-openpower-nec.md @@ -0,0 +1,57 @@ +--- +title: "Diversify Cloud Computing Services on OpenPOWER with NEC’s Resource Disaggregated Platform for POWER8 and GPUs" +date: "2016-05-24" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Takashi Yoshikawa and Shinji Abe, NEC Corporation_ + +The Resource Disaggregated (RD) Platform expands the use of cloud data centers in not only office applications, but also high performance computing (HPC) with the ability to simultaneously handle multiple demands for data storage, networks, and numerical/graphics processes. The RD platform performs computation by allocating devices from a resource pool at the device level to scale up individual performance and functionality. + +Since the fabric is [ExpEther](http://www.expether.org/index.html), open standard hardware and software can be utilized to build custom computer systems that deliver faster, more powerful, and more reliable computing solutions effectively to meet the growing demand for performance and flexibility. + +## Resource Disaggregated Computing Platform + +The figure shown below is the RD computing platform at Osaka University. In use since 2013, it provides GPU computing power for university students and researchers at Osaka University and other universities throughout Japan. + +![NEC RDCP 1](images/NEC-RDCP-1-1024x469.png) + +The most differentiating point of the system is that computing resources are custom-configured by attaching the necessary devices at the standard PCIe level, meaning you can scale up the performance of a certain function by attaching PCIe standard devices without any modification of software or hardware. + +For example, if you need the processing power of four GPUs for machine learning, you can attach them from the resource pool of GPUs to a single server, and when the job is finished, you can release them back into the pool. With this flexible reconfiguration of the system, you can use a standard 1U server as a GPU host. 
The resource disaggregated system is a very cost-effective architecture for using GPUs in cloud data centers. + +## [ExpEther Technology](https://openpowerfoundation.org/blogs/nec-acceleration-for-power/) + +![nec rdcp 2](images/nec-rdcp-2-1024x414.png) + +From the software's point of view, Ethernet is transparent, so the combination of the ExpEther engine chip and Ethernet is equivalent to a single-hop standard PCIe switch, even if multiple Ethernet switches exist in the network. By adopting this distributed switch architecture, the system can extend the connection distance to a few kilometers and scale to thousands of ports. And it is still just a standard PCI Express switch, so the customer can reutilize vast assets of PCIe hardware and software without any modification. + +By using ExpEther technology as a fabric for interconnects, an RD computing system can be built not only at rack scale but also at multi-rack and data center scale without performance degradation, because all the functions are implemented in a single hardware chip. + +## POWER8 Server and ExpEther + +We built an experimental setup with the Tyan POWER8 server (Habanero) and ExpEther. The 40G ExpEther HBA is mounted in the POWER8 server, with an NVIDIA K80 GPU and an SSD in remote locations connected through a standard 40GbE Mellanox switch. + +![nec rdcp 3](images/nec-rdcp-3-1024x619.png) + +We measured the GPU performance by using CUDA N-Body. The figure below shows that performance with ExpEther is comparable to a K80 inserted directly in a PCIe slot inside the server. This is because most of the simulation runs on the GPUs without interaction with the host node and other GPUs. Of course, results may vary depending on the workload. + +![nec rdcp 4](images/nec-rdcp-4-1024x590.png) + +As for the remotely mounted SSD, we saw about 463K IOPS in FIO benchmark testing (random 4KB reads). This IOPS value is almost the same as for a locally mounted SSD, meaning that there is no performance degradation on SSD reads. + +![nec rdcp 5](images/nec-rdcp-5.jpg) + +![nec rdcp 6](images/nec-rdcp-6-1024x652.png) + +## Conclusion + +- The Resource Disaggregated Platform expands the use of cloud data centers to not only office applications but also high performance computing. +- The Resource Disaggregated Platform performs computation by allocating devices from a resource pool at the device level to scale up individual performance and functionality. +- Since the fabric is ExpEther (Distributed PCIe Switch over Ethernet), open standard hardware and software can be utilized to build custom computer systems. +- A combination of the latest x8 PCIe Gen3 – dual 40GbE ExpEther and POWER8 server shows potential for intensive computing power. + +To learn more about the ExpEther Consortium, visit them at http://www.expether.org/index.html. To learn more about NEC's ExpEther and OpenPOWER, go to https://openpowerfoundation.org/blogs/nec-acceleration-for-power/. 
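For readers who want to repeat the SSD measurement above on their own ExpEther-attached storage, here is a minimal sketch of the kind of FIO random-read job described, driven from Python so it can be scripted with other tests. The device path, queue depth and job count are assumptions to tune for your hardware.

```python
import json
import subprocess

# These parameters are assumptions -- point --filename at the remote SSD
# and adjust iodepth/numjobs for the configuration you want to measure.
FIO_CMD = [
    "fio",
    "--name=randread-4k",
    "--filename=/dev/nvme0n1",   # the ExpEther-attached SSD device
    "--rw=randread",
    "--bs=4k",
    "--direct=1",
    "--ioengine=libaio",
    "--iodepth=32",
    "--numjobs=8",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
    "--output-format=json",
]

# Run fio and report the read IOPS from its JSON output.
result = subprocess.run(FIO_CMD, capture_output=True, check=True, text=True)
report = json.loads(result.stdout)
print("read IOPS:", report["jobs"][0]["read"]["iops"])
```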
diff --git a/content/blog/combining-out-of-band-monitoring-with-ai-and-big-data-for-datacenter-automation-in-openpower.md b/content/blog/combining-out-of-band-monitoring-with-ai-and-big-data-for-datacenter-automation-in-openpower.md new file mode 100644 index 0000000..e5c7af3 --- /dev/null +++ b/content/blog/combining-out-of-band-monitoring-with-ai-and-big-data-for-datacenter-automation-in-openpower.md @@ -0,0 +1,45 @@ +--- +title: "Combining Out-of-Band Monitoring with AI and Big Data for Datacenter Automation in OpenPOWER" +date: "2019-01-24" +categories: + - "blogs" +tags: + - "featured" +--- + +_Featuring OpenPOWER Academic Member: [The University of Bologna](https://www.unibo.it/en)_ + +By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM + +OpenPOWER hosted its [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/), gathering academic members of the OpenPOWER community to share their research and developments. + +One of the presenters was Professor [Andrea Bartolini](https://www.unibo.it/sitoweb/a.bartolini/en) of The University of Bologna. The focus of his presentation was datacenter automation. Bartolini shared how this process can be implemented, examples of applications, and future work within the Power architecture. + +Datacenter automation is an emerging trend that was developed to help with the increased complexity of supercomputers. To get this type of automation, heterogeneous sensors are placed in an environment to collect and transmit data, which are then extracted and interpreted using big data and artificial intelligence. These technologies allow for anomaly detection, which can improve the overall learning and performance of datacenters. After the information is interpreted, learned feedback is sent back to the sensors, optimizing the devices. + +Bartolini identified a few specific uses for this automation process: + +- Verifying and clarifying node performance +- Detecting security hazards +- Predictive maintenance + +Bartolini then focused the rest of his presentation on sharing different applications, including: + +- [D.A.V.I.D.E](https://www.e4company.com/en/?id=press&section=1&page=&new=davide_supercomputer), a supercomputer designed and developed by E4, was ranked in the [Top500](https://www.top500.org/system/179104). This system is used for measuring, monitoring and collecting data. D.A.V.I.D.E was designed in collaboration with Bartolini and the University of Bologna. +- Out-of-Band Monitoring: monitoring using nodes that allows for real-time frequency analysis of the power supply. + +Future work in this emerging practice of datacenter automation includes: + +- Extending the approach to in-house security and housekeeping tasks in datacenters +- Leveraging OpenBMC and custom firmware to deploy as part of the BMC +- Applying the process to larger POWER9 systems + +If you’d like to learn more, Bartolini’s full session and slides are below. 
+ +https://www.youtube.com/watch?v=bJ-R7SiFyho + + + +**[Combining out - of - band monitoring with AI and big data for datacenter automation in OpenPOWER](//www.slideshare.net/ganesannarayanasamy/combining-out-of-band-monitoring-with-ai-and-big-data-for-datacenter-automation-in-openpower "Combining out - of - band monitoring with AI and big data for datacenter automation in OpenPOWER")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)** diff --git a/content/blog/continuing-the-datacenter-revolution.md b/content/blog/continuing-the-datacenter-revolution.md new file mode 100644 index 0000000..b697799 --- /dev/null +++ b/content/blog/continuing-the-datacenter-revolution.md @@ -0,0 +1,54 @@ +--- +title: "Continuing the Datacenter Revolution" +date: "2016-01-05" +categories: + - "blogs" +tags: + - "featured" + - "ecosystem" + - "board-members" + - "blogs" +--- + +_By John Zannos and Calista Redmond_ + +![OPF logo](images/OPF-logo.jpg)Dear Fellow Innovators, + +As the newly elected Chair and President of the OpenPOWER Foundation, we would like to take this opportunity to share our vision as we embark on a busy 2016.  Additionally, we want to make sure our fellow members -- all 175 of us and growing -- are aware of the many opportunities we have to contribute to our vibrant and growing organization. + +## Our Vision + +First, the vision.  Through an active group of leading technologists, OpenPOWER in its first two formative years built a strong technical foundation -- developing the literal bedrock of hardware and software building blocks required to enable end users to take advantage of POWER's open architecture.  With several jointly developed OpenPOWER-based servers already in market, a [growing network](http://developers.openpowerfoundation.org/) of physical and cloud-based test servers and a wide range of other [resources and tools](https://openpowerfoundation.org/technical/technical-resources/) now available to developers around the world, we have a strong technical base.  We are now moving into our next phase: scaling the OpenPOWER ecosystem.  How will we do this?  With an unwavering commitment to optimize as many workloads on the POWER architecture as possible. + +It is in this vein that we have identified our top three priorities for 2016: + +1. **Tackle system bottlenecks** through collaboration on memory bandwidth, acceleration, and interconnect advances. +2. **Grow workloads and software community** optimizing on OpenPOWER. +3. **Further OpenPOWER’s validation through adoption** conveyed via member and end user testimonials, benchmarking, and industry influencer reports. + +As employees of Canonical and IBM, and active participants in OpenPOWER activities stemming back to the early days, we share a deep commitment to open ecosystems as a driver for meaningful innovation.  Combining Canonical's leadership in growing software applications on the POWER architecture with IBM's base commitment to open development on top of the POWER architecture at all levels of the stack, we stand ready to help lead an even more rapid expansion of the OpenPOWER ecosystem in 2016.  This commitment, however, extends well beyond Canonical and IBM, across the entire [Board leadership](https://openpowerfoundation.org/about-us/board-of-directors/), which continues to reflect the diversity of our membership.  
Two of the original founders of OpenPOWER -- our outgoing chair Gordon MacKean of Google and president Brad McCredie of IBM -- will remain close and serve as non-voting Board Advisors, providing guidance on a wide range of technical and strategic activities as needed. To read Gordon MacKean's perspective on OpenPOWER's growth, we encourage you to read his [personal Google+ post](https://plus.google.com/112847999124594649509/posts/PDcmTZzsHDg). + +In driving OpenPOWER’s vision forward, we are fortunate to have at our disposal not just our formal leadership team, but a deep bench of talent throughout the entire organization – you – literally dozens of the world's leading technologists representing all levels of the technology stack across the globe. With your support behind us, we're sure the odds are stacked in our favor and we can't wait to get started. + +## Get Involved + +So, now that you've heard our vision for 2016, how can you get involved? + +[![OpenPOWER_Summit2016_logo_950](images/OpenPOWER_Summit2016_logo_950.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/10/OpenPOWER_Summit2016_logo_950.jpg) + +- **Make the most out of the 2016 OpenPOWER Summit** – Register to attend, exhibit, submit a poster or present at this year’s North American OpenPOWER Summit in San Jose April 5-7. And, think about what OpenPOWER-related news you can reveal at the show.  We are expecting 200+ press and analysts to attend, so this is an opportunity for Members to get some attention.  Be on the lookout for a “Call for News” email soon.  Click [here](https://openpowerfoundation.org/openpower-summit-2016/) to register and get more details.  Specific questions can be directed to the Summit Steering Committee at [opfs2016sg@openpowerfoundation.org](mailto:opfs2016sg@openpowerfoundation.org). +- **Contribute your technical expertise** – Share your technical abilities and drive innovation with fellow technology industry leaders through any of the established [Technical Work Groups](https://openpowerfoundation.org/technical/working-groups/). Contact Technical Steering Committee Chair Jeff Brown at [jeffdb@us.ibm.com](mailto:jeffdb@us.ibm.com) to learn more or to join a work group. +- **Shape market perceptions** – Share your marketing expertise and excitement for the OpenPOWER Foundation by joining the marketing committee. Email the marketing committee at [mktg@openpowerfoundation.org](mailto:mktg@openpowerfoundation.org) to join the committee or learn more. +- **Join the Academic Discussion Group** – Participate in webinars, workshops, contests, and collaboration activities. Email Ganesan Narayanasamy at [ganesana@in.ibm.com](mailto:ganesana@in.ibm.com) to join the group or learn more. +- **Link up with geographic interests** – The European member organizer is Amanda Quartly at [mandie\_quartly@uk.ibm.com](mailto:mandie_quartly@uk.ibm.com). The Asia Pacific member organizer is Calista Redmond at [credmond@us.ibm.com](mailto:credmond@us.ibm.com). +- **Tap into technical resources** – Use and build on the technical resources, cloud environments, and loaner systems available. Review what [technical resources and tools](https://openpowerfoundation.org/technical/technical-resources/) are now available and the [growing network](http://developers.openpowerfoundation.org/) of physical and cloud-based test servers available worldwide. 
+- **Engage OpenPOWER in industry events and forums** – Contact Joni Sterlacci at [j.sterlacci@ieee.org](mailto:j.sterlacci@ieee.org) if you know of an event which may be appropriate for OpenPOWER to have an official presence. +- **Share your stories** – Send your end-user success stories, benchmarks, and product announcements to OpenPOWER marketing committee member Greg Phillips at [gregphillips@us.ibm.com](mailto:gregphillips@us.ibm.com). +- **Write a blog** – Submit a blog to be published on the [OpenPOWER Foundation blog](https://openpowerfoundation.org/newsevents/#category-blogs) detailing how you're innovating with OpenPOWER. Send details to OpenPOWER Foundation blog editor Sam Ponedal at [sponeda@us.ibm.com](mailto:sponeda@us.ibm.com). +- **Join the online discussion** – Follow and join the OpenPOWER social conversations on [Twitter](https://twitter.com/openpowerorg), [Facebook](https://www.facebook.com/openpower), [LinkedIn](https://www.linkedin.com/groups/7460635) and [Google+](https://plus.google.com/117658335406766324024/posts). + +And, finally, please do not hesitate to reach out to either of us personally to discuss anything OpenPOWER-related at any time.  Seriously.  We’d love to hear from you! + +Yours in collaboration, + +John Zannos, OpenPOWER Chair – [john.zannos@canonical.com](mailto:john.zannos@canonical.com) + +Calista Redmond, OpenPOWER President – [credmond@us.ibm.com](mailto:credmond@us.ibm.com) diff --git a/content/blog/creativec-vasp-power.md b/content/blog/creativec-vasp-power.md new file mode 100644 index 0000000..bf41f33 --- /dev/null +++ b/content/blog/creativec-vasp-power.md @@ -0,0 +1,32 @@ +--- +title: "CreativeC Optimizes VASP on Power for Alloy Design" +date: "2018-11-29" +categories: + - "blogs" +tags: + - "featured" +--- + +\[caption id="attachment\_5954" align="alignleft" width="188"\][![](images/Greg_S_headshot.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/Greg_S_headshot.jpg) Greg Scantlen, CEO, CreativeC\[/caption\] + +[The Vienna Ab initio Simulation Package](https://www.vasp.at/index.php/about-vasp/59-about-vasp) – also known as VASP – is a popular and powerful HPC application. It is one of the most popular tools in atomistic materials modeling, used for electronic structure calculations and quantum-mechanical molecular dynamics. + +It has been developed at the University of Vienna in Austria for close to thirty years and contains roughly half-a-million lines of code. Currently, it’s used by more than 1,400 research groups in academia and industry worldwide and consistently ranks among the top 10 applications on national supercomputers. + +But despite its significant impact on technology, there is one fundamental problem with VASP and similar programs: it does not scale very well. So instead of accelerating workloads, naively running VASP on more nodes can have the opposite effect. In fact, we observed that VASP actually runs _slower_ when operating on more than eight traditional nodes. + +Since VASP doesn’t scale well on traditional clusters, it’s a perfect fit for the OpenPOWER architecture. Because OpenPOWER has the highest compute density available in a single node, we applied for and received grant funding to run VASP quantum chemistry simulations on OpenPOWER. + +Now, we’re running as well as or slightly faster on a single OpenPOWER node than we previously did on eight x86 Linux-based compute nodes. 
More importantly, in the early phase of this project, we don’t have to contend with rigid time limits and full queues at shared computing facilities. Instead of artificially adding break points and chopping the project into smaller parcels, we can explore larger model sizes and focus on the science. + +The result is a more efficient use of computing resources – reduced waiting time and an accelerated timeline for innovative, ground-breaking research. + +One project we are pursuing with VASP seeks to improve hip and knee implants. Often, the titanium alloys used in hip and knee implants are much stronger than bone, sometimes causing bone atrophy following an implant procedure. Our goal is to use VASP on OpenPOWER to identify an alloy with properties more compatible with bone than traditional titanium alloys. + +Improved hip and knee implants are only one advancement that could be made from running VASP on an OpenPOWER system – and there are certainly others! + +**[![](images/CreativeC-LOGO-300dpi-RGB-page-001-300x262.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/CreativeC-LOGO-300dpi-RGB-page-001.jpg)About CreativeC** + +CreativeC’s mission is to facilitate Science and Engineering by computing faster. CreativeC’s discipline is work codesigned High Performance Computing (HPC). We team with expert software developers to offer specialized Instruments for Science and Engineering in the disciplines of Materials Science, Computational Chemistry, Molecular Dynamics, Deep Learning, Neural Networks, Drug Discovery, Biotechnology, and Bioinformatics. Our business model calls for diversification into areas of Science and Engineering made commercially viable by new compute technologies. + +[http://creativecllc.com/](http://creativecllc.com/) diff --git a/content/blog/crossing-the-performance-chasm-with-openpower.md b/content/blog/crossing-the-performance-chasm-with-openpower.md new file mode 100644 index 0000000..68290e2 --- /dev/null +++ b/content/blog/crossing-the-performance-chasm-with-openpower.md @@ -0,0 +1,32 @@ +--- +title: "Crossing the Performance Chasm with OpenPOWER" +date: "2015-02-25" +categories: + - "blogs" +--- + +### Executive Summary + +The increasing use of smart phones, sensors and social media is a reality across many industries today. It is not just where and how business is conducted that is changing, but the speed and scope of the business decision-making process is also transforming because of several emerging technologies – Cloud, High Performance Computing (HPC), Analytics, Social and Mobile (CHASM). + +High Performance Data Analytics (HPDA) is the fastest growing segment within HPC. Businesses are investing in HPDA to improve customer experience and loyalty, discover new revenue opportunities, detect fraud and breaches, optimize oil and gas exploration and production, improve patient outcomes, mitigate financial risks, and more. Likewise, HPDA helps governments respond faster to emergencies, analyze terrorist threats better and more accurately predict the weather – all of which are vital for national security, public safety and the environment. The economic and social value of HPDA is immense. + +But the sheer volume, velocity and variety of data is an obstacle to crossing the Performance Chasm in almost every industry.  To meet this challenge, organizations must deploy a cost-effective, high-performance, reliable and agile IT infrastructure to deliver the best possible business outcomes. 
This is the goal of IBM’s data-centric design of Power Systems and the OpenPOWER Foundation. + +A key underlying belief driving the OpenPOWER Foundation is that focusing solely on microprocessors is insufficient to help organizations cross this Performance Chasm. System stack (processors, memory, storage, networking, file systems, systems management, application development environments, accelerators, workload optimization, etc.) innovations are required to improve performance and cost/performance. IBM’s data-centric design minimizes data motion, enables compute capabilities across the system stack, provides a modular, scalable architecture and is optimized for HPDA. + +Real-world examples of innovations and performance enhancements resulting from IBM’s data-centric design of Power Systems and the OpenPOWER Foundation are discussed. These span financial services, life sciences, oil and gas and other HPDA workloads. These examples highlight the urgent need for clients (and the industry) to evaluate HPC system performance at the solution/workflow level rather than just on narrow synthetic point benchmarks such as LINPACK that have long dominated the industry’s discussion. + +Clients who invest in IBM Power Systems for HPC could lower the total cost of ownership (TCO) with fewer, more reliable servers compared to x86 alternatives.  More importantly, these customers will also be able to cross the Performance Chasm leveraging high-value offerings delivered by the OpenPOWER Foundation for many real-life HPC workloads. + +### Speaker + +_Sponsored by IBM_ **Srini Chari, Ph.D., MBA** [**chari@cabotpartners.com**](mailto:chari@cabotpartners.com) + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Chari-Srini_OPFS2015_IBMCabotPartners_031315_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/data-center-and-cloud-computing-market-landscape-and-challenges.md b/content/blog/data-center-and-cloud-computing-market-landscape-and-challenges.md new file mode 100644 index 0000000..1453f50 --- /dev/null +++ b/content/blog/data-center-and-cloud-computing-market-landscape-and-challenges.md @@ -0,0 +1,26 @@ +--- +title: "Data center and Cloud computing market landscape and challenges" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Presentation Objective + +In this talk, we will gain an understanding of the data center and cloud computing market landscape and its challenges, discuss the technology challenges that limit the scaling of cloud computing as it grows at an exponential pace, and wrap up with insights into how FPGAs combined with general-purpose processors are transforming next-generation data centers with tremendous compute horsepower, low latency and extreme power efficiency. + +### Abstract + +Data center workloads demand high computational capabilities, flexibility, power efficiency, and low cost. In the computing hierarchy, general purpose CPUs excel at Von Neumann (serial) processing, GPUs perform well on highly regular SIMD processing, whereas inherently parallel FPGAs excel on specialized workloads. Examples of specialized workloads: compute and network acceleration, video and data analytics, financial trading, storage, database and security.  High level programming languages such as OpenCL have created a common development environment for CPUs, GPUs and FPGAs. This has led to adoption of hybrid architectures and a Heterogeneous World. 
This talk showcases FPGA-based acceleration examples with CAPI attach through OpenPOWER collaboration and highlights performance, power and latency benefits. + +### Speaker Bio + +Manoj Roge is Director of Wired & Data Center Solutions Planning at Xilinx. Manoj is responsible for product/roadmap strategy and driving technology collaborations with partners. Manoj has spent 21 years in the semiconductor industry, with the past 10 years in the FPGA industry. He has been in various engineering and marketing/business development roles with increasing responsibilities. Manoj has been instrumental in driving broad innovative solutions through his participation in multiple standards bodies and consortiums. He holds an MBA from Santa Clara University, an MSEE from the University of Texas at Arlington and a BSEE from the University of Bombay. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/RogeManoj_OPFS2015_Xilinx_031815.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/data-centric-interactive-visualization-of-very-large-data.md b/content/blog/data-centric-interactive-visualization-of-very-large-data.md new file mode 100644 index 0000000..ee6b52f --- /dev/null +++ b/content/blog/data-centric-interactive-visualization-of-very-large-data.md @@ -0,0 +1,28 @@ +--- +title: "Data Centric Interactive Visualization of Very Large Data" +date: "2015-01-19" +categories: + - "blogs" +--- + +Speakers: Bruce D’Amora and Gordon Fossum + +Organization: IBM T.J. Watson Research, Data Centric Systems Group + +### Abstract + +The traditional workflow for high-performance computing simulation and analytics is to prepare the input data set, run a simulation, and visualize the results as a post-processing step. This process generally requires multiple computer systems designed for accelerating simulation and visualization. In the medical imaging and seismic domains, the data to be visualized typically comprise uniform three-dimensional arrays that can approach tens of petabytes. Transferring this data from one system to another can be daunting and in some cases may violate privacy, security, and export constraints.  Visually exploring these very large data sets consumes significant system resources and time that can be conserved if the simulation and visualization can reside on the same system to avoid time-consuming data transfer between systems. End-to-end workflow time can be reduced if the simulation and visualization can be performed simultaneously with a fast and efficient transfer of simulation output to visualization input. + +Data centric visualization provides a platform architecture where the same high-performance server system can execute simulation, analytics and visualization.  We present a visualization framework for interactively exploring very large data sets using both isoparametric point extraction and direct volume-rendering techniques.  Our design and implementation leverage high performance IBM Power servers enabled with NVIDIA GPU accelerators and flash-based high bandwidth low-latency memory. GPUs can accelerate generation and compression of two-dimensional images that can be transferred across a network to a range of devices including large display walls, workstation/PC, and smart devices. Users are able to remotely steer visualization, simulation, and analytics applications from a range of end-user devices including common smart devices such as phones and tablets. 
In this presentation, we discuss and demonstrate an early implementation and additional challenges for future work. + +### Speaker Bios + +**Bruce D’Amora**, _IBM Research Division, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, New York 10598 (_[_damora@us.ibm.com_](mailto:damora@us.ibm.com)_)._ Mr. D’Amora is a Senior Technical Staff Member in the Computational Sciences department in the Data-centric Computing group.  He is currently focusing on frameworks to enable computational steering and visualization for high performance computing applications.  Previously, Mr. D’Amora was the chief architect of Cell Broadband Engine-based platforms to accelerate applications used for creating digital animation and visual effects. He has been a lead developer on many projects ranging from applications to microprocessors and holds a number of hardware and software patents. He joined IBM Research in 2000 after serving as the Chief Software Architect for the IBM Graphics development group in Austin, Texas, where he led the OpenGL development effort from 1991 to 2000. He holds Bachelor’s degrees in Microbiology and Applied Mathematics from the University of Colorado. He also holds a Master’s degree in Computer Science from National Technological University. + +**Gordon C. Fossum** _IBM Research Division, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, New York 10598 (_[_fossum@us.ibm.com_](mailto:fossum@us.ibm.com)_)._  Mr. Fossum is an Advisory Engineer in Computational Sciences at the Thomas J. Watson Research Center. He received a B.S. degree in Mathematics and Computer Science from the University of Illinois in 1978, an M.S. in Computer Science from the University of California, Berkeley in 1981, and attained "all but dissertation" status from the University of Texas in 1987.  He subsequently joined IBM Austin, where he has worked on computer graphics hardware development, Cell Broadband Engine development, and OpenCL development. He is an author or coauthor of 34 patents, has received a "high value patent" award from IBM and was named an IBM Master Inventor in 2005. In January 2014, he transferred into IBM Research, to help enable visualization of “big data” in a data-centric computing environment. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/D’Amora-Bruce_OPFS2015_IBM_031015_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/db2-blu-wgpu-demo-concurrent-execution-of-an-analytical-workload-on-a-power8-server-with-k40-gpus.md b/content/blog/db2-blu-wgpu-demo-concurrent-execution-of-an-analytical-workload-on-a-power8-server-with-k40-gpus.md new file mode 100644 index 0000000..904107e --- /dev/null +++ b/content/blog/db2-blu-wgpu-demo-concurrent-execution-of-an-analytical-workload-on-a-power8-server-with-k40-gpus.md @@ -0,0 +1,24 @@ +--- +title: "DB2 BLU w/GPU Demo - Concurrent execution of an analytical workload on a POWER8 server with K40 GPUs" +date: "2015-02-25" +categories: + - "blogs" +--- + +### Abstract + +In this technology preview demonstration, we will show the concurrent execution of an analytical workload on a POWER8 server with K40 GPUs. DB2 will detect both the presence of GPU cards in the server and the opportunity in queries to shift the processing of certain core operations to the GPU.  The required data will be copied into the GPU memory, the operation performed and the results sent back to the POWER8 processor for any remaining processing. 
The objective is to 1) reduce the elapsed time for the operation and 2) make more CPU available to other SQL processing, increasing overall system throughput by moving intensive CPU processing tasks to the GPU. + +### Speaker names / Titles + +Sina Meraji, PhD, Hardware Acceleration Laboratory, SWG [Sinamera@ca.ibm.com](mailto:Sinamera@ca.ibm.com) + +Berni Schiefer, Technical Executive (aka DE), Information Management Performance and Benchmarks DB2, BigInsights / Big SQL, BlueMix SQLDB / Analytics Warehouse and Optim Data Studio [schiefer@ca.ibm.com](mailto:schiefer@ca.ibm.com) + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Meraji_OPFS2015_IBM_031715.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/deep-learning-goes-to-the-dogs.md b/content/blog/deep-learning-goes-to-the-dogs.md new file mode 100644 index 0000000..8fa17b8 --- /dev/null +++ b/content/blog/deep-learning-goes-to-the-dogs.md @@ -0,0 +1,42 @@ +--- +title: "Deep Learning Goes to the Dogs" +date: "2016-11-10" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Indrajit Poddar, Yu Bo Li, Qing Wang, Jun Song Wang, IBM_ + +These days you can see machine and deep learning applications in so many places. Get driven by a [driverless car](http://www.bloomberg.com/news/features/2016-08-18/uber-s-first-self-driving-fleet-arrives-in-pittsburgh-this-month-is06r7on). Check if your email is really conveying your sense of joy with the [IBM Watson Tone Analyzer](https://tone-analyzer-demo.mybluemix.net/), and [see IBM Watson beat the best Jeopardy player](https://www.youtube.com/watch?v=P0Obm0DBvwI) in the world in speed and accuracy. Facebook is even using image recognition tools to suggest tagging people in your photos; it knows who they are! + +## Barking Up the Right Tree with the IBM S822LC for HPC + +We wanted to see what it would take to get started building our very own deep learning application and host it in a cloud. We used the open source deep learning framework, [Caffe](http://caffe.berkeleyvision.org/), and example classification Jupyter notebooks from GitHub, like [classifying with ImageNet](http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb). We found several published trained models, e.g. GoogLeNet from the [Caffe model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo). For a problem to solve, we chose dog breed classification. That is, given a picture of a dog, can we automatically identify the breed? This is actually a [class project](http://cs231n.stanford.edu/) from Stanford University with student reports, such as [this one](http://cs231n.stanford.edu/reports/fcdh_FinalReport.pdf) from David Hsu. + +We started from the [GoogLeNet model](https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet) and created our own model trained on the [Stanford Dogs Dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) using a system similar to the [IBM S822LC for HPC systems with NVIDIA Tesla P100 GPUs](https://blogs.nvidia.com/blog/2016/09/08/ibm-servers-nvlink/) connected to the CPU with NVIDIA NVLink. As David remarked in his report, without GPUs, it takes a very long time to train a deep learning model on even a small-sized dataset. + +Using a previous generation IBM S822LC OpenPOWER system with an NVIDIA Tesla K80 GPU, we were able to train our model in only a few hours. 
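Once a model like this is trained, querying it from Python takes only a few lines. Here is a minimal sketch of the classification step; the deploy prototxt, weights file, label list and image name are placeholders for whatever your own training run produces:

```python
import caffe

# Use the GPU, as we did for training.
caffe.set_mode_gpu()

# Placeholder file names -- substitute the deploy prototxt, fine-tuned
# weights and label list generated by your own GoogLeNet training run.
MODEL_DEF = 'deploy.prototxt'
MODEL_WEIGHTS = 'dog_breeds_googlenet.caffemodel'
LABELS_FILE = 'breed_labels.txt'

# caffe.Classifier takes care of resizing, channel swapping and scaling.
net = caffe.Classifier(MODEL_DEF, MODEL_WEIGHTS,
                       channel_swap=(2, 1, 0),  # Caffe models expect BGR
                       raw_scale=255,           # images load as [0, 1] floats
                       image_dims=(256, 256))

image = caffe.io.load_image('german-shepherd.jpg')
probabilities = net.predict([image])[0]

# Print the five most likely breeds.
labels = [line.strip() for line in open(LABELS_FILE)]
for idx in probabilities.argsort()[::-1][:5]:
    print('%.3f  %s' % (probabilities[idx], labels[idx]))
```

Wrapped in a small HTTP handler, a snippet along these lines is essentially all that a classification micro-service like the one described below needs.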
The [IBM S822LC for HPC systems](http://www-03.ibm.com/systems/power/hardware/s822lc-hpc/) feature not only the most powerful NVIDIA Tesla P100 GPUs, but also two IBM POWER8 processors interconnected with powerful [NVIDIA NVLink adapters](https://en.wikipedia.org/wiki/NVLink). These systems make data transfers between main memory and GPUs significantly faster compared to systems with PCIe interconnects. + +## Doggy Docker for Deep Learning + +We put [our Caffe model and our classification code](https://github.com/Junsong-Wang/pet-breed) written in Python into a web application inside a Docker container and deployed it with Apache Mesos and Marathon. Apache Mesos is an open source cluster management application with fine-grained resource scheduling features which now recognize [GPUs](http://www.nvidia.com/object/apache-mesos.html) as cluster-wide resources. + +In addition to Apache Mesos, it is possible to run other cluster managers, such as Kubernetes and Spectrum Conductor for Containers, as well as Docker GPU management components, such as [nvidia-docker](https://github.com/NVIDIA/nvidia-docker), on OpenPOWER systems (see [presentation](http://www.slideshare.net/IndrajitPoddar/enabling-cognitive-workloads-on-the-cloud-gpus-with-mesos-docker-and-marathon-on-power)). In addition to Caffe, it is possible to run other [popular deep learning frameworks and tools](https://openpowerfoundation.org/blogs/deep-learning-options-on-openpower/) such as Torch, Theano, DIGITS and [TensorFlow](https://www.ibm.com/developerworks/community/blogs/fe313521-2e95-46f2-817d-44a4f27eba32/entry/Building_TensorFlow_on_OpenPOWER_Linux_Systems?lang=en) on OpenPOWER systems. + +This [lab tutorial](http://www.slideshare.net/IndrajitPoddar/fast-scalable-easy-machine-learning-with-openpower-gpus-and-docker) walks through some simple sample use cases. In addition, some cool examples can be seen from the results of the recently concluded [OpenPOWER Developer Challenge](https://openpowerfoundation.org/blogs/openpower-developer-challenge-finalists/). + +## This Dog Will Hunt + +Our little GPU-accelerated pet breed classification micro-service is running in a Docker container and can be accessed at this [link](http://129.33.248.88:31001/) from a mobile device or laptop. See for yourself! + +For example, given this image link from a Google search for "dog images", [https://www.petpremium.com/pets-pics/dog/german-shepherd.jpg](https://www.petpremium.com/pets-pics/dog/german-shepherd.jpg), we got this correct classification in 0.118 seconds: + +![German Shepherd Deep Learning Dogs](images/dl-dogs-1.png) + +You can also spin up your own GPU Docker container with deep learning libraries (e.g. Caffe) in the [NIMBIX cloud](https://platform.jarvice.com/landing) and train your own model and develop your own accelerated classification example. + +![dl-dogs-2](images/dl-dogs-2.png) + +Give it a try and share your screenshots in the comments section below! 
diff --git a/content/blog/deep-learning-options-on-openpower.md b/content/blog/deep-learning-options-on-openpower.md new file mode 100644 index 0000000..452f250 --- /dev/null +++ b/content/blog/deep-learning-options-on-openpower.md @@ -0,0 +1,39 @@ +--- +title: "Deep Learning Options on OpenPOWER Expand with New Distributions" +date: "2016-09-14" +categories: + - "blogs" +tags: + - "featured" + - "deep-learning" + - "machine-learning" + - "cognitive" +--- + +_By Michael Gschwind, Chief Engineer, Machine Learning and Deep Learning, IBM Systems_ + +![open key new 5](images/open-key-new-5.jpg) + +I am pleased to announce a major update to the deep learning frameworks available for OpenPOWER as software “distros” (distributions) that are as easily installable as ever using the Ubuntu system installer. + +## Significant updates to Key Deep Learning Frameworks on OpenPOWER + +Building on the great response to our first release of the Deep Learning Frameworks, we have made significant updates by refreshing all the available frameworks now available on OpenPOWER as pre-built binaries optimized for GPU acceleration: + +- [**Caffe**](http://caffe.berkeleyvision.org/), a dedicated artificial neural network (ANN) training environment developed by the Berkeley Vision and Learning Center at the University of California at Berkeley, is now available in two versions: the leading-edge Caffe development version from UCB’s BVLC, and a Caffe version tuned by NVIDIA to offer even more scalability using GPUs. +- [**Torch**](http://torch.ch/), a framework consisting of several ANN modules built on an extensible mathematics library +- [**Theano**](http://deeplearning.net/software/theano/), another framework consisting of several ANN modules built on an extensible mathematics library + +The updated Deep Learning software distribution also includes [**DIGITS**](https://developer.nvidia.com/digits), a graphical user interface to make users immediately productive at using the Caffe and Torch deep learning frameworks. + +As always, we’ve ensured that these environments may be built from the source repository for those who prefer to compile their own binaries. + +## New Distribution, New Levels of Performance + +The new distribution includes major performance enhancements in all key areas: + +- **The OpenBLAS** linear algebra library includes enhancements to take full advantage of the [POWER vector-scalar instruction set](https://www.researchgate.net/publication/299472451_Workload_acceleration_with_the_IBM_POWER_vector-scalar_architecture), offering a manifold speedup to processing on POWER CPUs. +- **The Mathematical Acceleration Subsystem (MASS) for Linux** high-performance mathematical libraries are made available in freely distributable form and free of charge to accelerate cognitive and other Linux applications by exploiting the latest advances in mathematical algorithm optimization and advanced POWER processor features, in particular the [POWER vector-scalar instruction set](https://www.researchgate.net/publication/299472451_Workload_acceleration_with_the_IBM_POWER_vector-scalar_architecture). +- **cuDNN** v5.1 enables Linux on Power cognitive applications to take full advantage of the latest GPU processing features and the newest GPU accelerators. + +## [To get started with or upgrade to the latest version of the MLDL frameworks, download the installation instructions](http://ibm.co/1YpWn5h). 
diff --git a/content/blog/department-of-energy-awards-425-million-for-next-generation-supercomputing-technologies.md b/content/blog/department-of-energy-awards-425-million-for-next-generation-supercomputing-technologies.md new file mode 100644 index 0000000..d5ee953 --- /dev/null +++ b/content/blog/department-of-energy-awards-425-million-for-next-generation-supercomputing-technologies.md @@ -0,0 +1,25 @@ +--- +title: "Department of Energy Awards $425 Million for Next Generation Supercomputing Technologies" +date: "2014-11-20" +categories: + - "press-releases" + - "blogs" +tags: + - "department-of-energy" + - "coral" + - "supercomputer" +--- + +WASHINGTON — U.S. Secretary of Energy Ernest Moniz today announced two new High Performance Computing (HPC) awards to put the nation on a fast-track to next generation exascale computing, which will help to advance U.S. leadership in scientific research and promote America’s economic and national security. + +Secretary Moniz announced $325 million to build two state-of-the-art supercomputers at the Department of Energy’s Oak Ridge and Lawrence Livermore National Laboratories.  The joint Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) was established in early 2014 to leverage supercomputing investments, streamline procurement processes and reduce costs to develop supercomputers that will be five to seven times more powerful when fully deployed than today’s fastest systems in the U.S. In addition, Secretary Moniz also announced approximately $100 million to further develop extreme scale supercomputing technologies as part of a research and development program titled FastForward 2. + +“High-performance computing is an essential component of the science and technology portfolio required to maintain U.S. competitiveness and ensure our economic and national security,” Secretary Moniz said. “DOE and its National Labs have always been at the forefront of HPC and we expect that critical supercomputing investments like CORAL and FastForward 2 will again lead to transformational advancements in basic science, national defense, environmental and energy research that rely on simulations of complex physical systems and analysis of massive amounts of data.” + +Both CORAL awards leverage the IBM Power Architecture, NVIDIA’s Volta GPU and Mellanox’s interconnect technologies to advance key research initiatives for national nuclear deterrence, technology advancement and scientific discovery. Oak Ridge National Laboratory’s (ORNL’s) new system, Summit, is expected to provide at least five times the performance of ORNL’s current leadership system, Titan. Lawrence Livermore National Laboratory’s (LLNL’s) new supercomputer, Sierra, is expected to be at least seven times more powerful than LLNL’s current machine, Sequoia. Argonne National Laboratory will announce its CORAL award at a later time. + +The second announcement today, FastForward 2, seeks to develop critical technologies needed to deliver next-generation capabilities that will enable affordable and energy-efficient advanced extreme scale computing research and development for the next decade.  The joint project between the DOE Office of Science and the National Nuclear Security Administration (NNSA) will be led by computing industry leaders AMD, Cray, IBM, Intel and NVIDIA. + +In an era of increasing global competition in high-performance computing, advancing the Department of Energy’s computing capabilities is key to sustaining the innovation edge in science and technology that underpins U.S. 
national and economic security while driving down the energy and costs of computing. The overall goal of both CORAL and FastForward 2 is to establish the foundation for the development of exascale computing systems that would be 20-40 times faster than today’s leading supercomputers. + +For more information on CORAL, please click on the following fact sheet [HERE](http://www.energy.gov/downloads/fact-sheet-collaboration-oak-ridge-argonne-and-livermore-coral). diff --git a/content/blog/deploying-power8-virtual-machines-in-ovh-public-cloud.md b/content/blog/deploying-power8-virtual-machines-in-ovh-public-cloud.md new file mode 100644 index 0000000..451fd25 --- /dev/null +++ b/content/blog/deploying-power8-virtual-machines-in-ovh-public-cloud.md @@ -0,0 +1,89 @@ +--- +title: "Deploying POWER8 Virtual Machines in OVH Public Cloud" +date: "2015-02-24" +categories: + - "blogs" +--- + +_By Carol B. Hernandez, Sr. Technical Staff Member, Power Systems Design_ + +Deploying POWER8 virtual machines for your projects is straightforward and fast using OVH POWER8 cloud services. POWER8 virtual machines are available in two flavors in OVH’s RunAbove cloud: [http://labs.runabove.com/power8/](http://labs.runabove.com/power8/). + +[![image1](images/image1-300x272.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image1.png) ![image2](images/image2-300x300.png) + +POWER8 compute is offered in RunAbove as a “Lab”. [Labs](http://labs.runabove.com/index.xml) provide access to the latest technologies in the cloud and are not subject to Service Level Agreements (SLA). I signed up for the POWER8 lab and decided to share my experience and findings. + +To get started, you have to open a RunAbove account and sign up for the POWER8 Lab at: [https://cloud.runabove.com/signup/?launch=power8](https://cloud.runabove.com/signup/?launch=power8). + +When you open a RunAbove account, you link the account to a form of payment: credit card or PayPal account. I had trouble using the credit card path but was able to link to a PayPal account successfully. + +After successfully signing up for the POWER8 lab, you are taken to the RunAbove home page, which defaults to “Add an Instance”. + +[![image4](images/image4.jpeg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image4.jpeg) + +The process to create a POWER8 instance (aka virtual machine) is straightforward. You select the data center (North America BHS-1), the “instance flavor” (Power 8S), and the instance image (Ubuntu 14.04). + +[![image5](images/image5.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image5.png) + +Then, you select the SSH key to access the virtual machine. The first time I created an instance, I had to add my SSH key. After that, I just had to select among the available SSH keys. + +The last step is to enter the instance name and you are ready to “fire up”. The IBM POWER8 S flavor gives you a POWER8 virtual machine with 8 virtual processors, 4 GB of RAM, and 10 GB of object storage. The virtual machine is connected to the default external network. The Ubuntu 14.04 image is preloaded in the virtual machine. + +After a couple of minutes, you get the IP address and can ssh to your POWER8 virtual machine. 
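Everything in this walkthrough can also be scripted. RunAbove exposes standard OpenStack APIs (the full service list appears later in this post), so the same instance can be created with any OpenStack client. Below is a minimal sketch using the openstacksdk Python library; the authentication endpoint, project, flavor and image names are assumptions to replace with the values shown in your account's OpenStack Horizon interface:

```python
import openstack

# Placeholder credentials -- copy the real endpoint, project and region
# from the API access page of the OpenStack Horizon interface.
conn = openstack.connect(
    auth_url='https://auth.example.com/v3',
    project_name='my-project',
    username='user@example.com',
    password='my-password',
    region_name='BHS-1',
)

# Look up the POWER8 flavor, the Ubuntu image and a registered SSH key
# (names are assumptions -- list flavors and images to find the exact ones).
flavor = conn.compute.find_flavor('power8.s')
image = conn.compute.find_image('Ubuntu 14.04')
keypair = conn.compute.find_keypair('my-ssh-key')

# Boot the instance, wait until it is active, then print its IP address.
server = conn.compute.create_server(
    name='power8-dev-01',
    flavor_id=flavor.id,
    image_id=image.id,
    key_name=keypair.name,
)
server = conn.compute.wait_for_server(server)
print(server.access_ipv4)
```

Deleting the instance when you are done (`conn.compute.delete_server(server)`) keeps the hourly billing described below to a minimum.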
+ +[![image6](images/image6.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image6.jpg) [![image13](images/image13.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image13.png) + +  + +You can log in to your POWER8 virtual machine and upgrade the Linux image to the latest release available, using the appropriate Linux distribution commands. I was able to successfully upgrade to Ubuntu 14.10. + +The default RunAbove interface (Simple Mode) provides access to a limited set of tasks, e.g. add and remove instances, SSH keys, and object storage. The OpenStack Horizon interface, accessed through the drop down menu under the user name, provides access to an extended set of tasks and options. + +[![image8](images/image8.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image8.png) + +Some of the capabilities available through the OpenStack Horizon interface are: + +**Create snapshots.** This function is very helpful to capture custom images that can be used later on to create other virtual machines. I created a snapshot of the POWER8 virtual machine after upgrading the Linux image to Ubuntu 14.10. + +[![image9](images/image9.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image9.png) + +**Manage project images.** You can add images to your project by creating snapshots of your virtual machines or importing an image using the Create Image task. The figure below shows a couple of snapshots of POWER8 virtual machines after the images were customized by upgrading to Ubuntu 14.10 or adding various packages for development purposes. + +[![image10](images/image10.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image10.png) + +**Add private network connections.** You can create a local network and connect your virtual machines to it when you create an instance. + +[![image11](images/image11.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image11.png) + +**Create instance from snapshot.** The launch instance task, provided in the OpenStack Horizon interface, allows you to create a virtual machine using a snapshot from the project image library. In this example, the snapshot of a virtual machine that was upgraded to Ubuntu 14.10 was selected. + +[![image12](images/image12.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image12.png) + +[![image7](images/image7.jpeg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image7.jpeg) + +**Customize instance configuration.** The launch instance task also allows you to add the virtual machine to a private network and specify post-deployment customization scripts, e.g. OpenStack user-data. + +[![image14](images/image14.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image14.jpg) + +All of these capabilities are also available through OpenStack APIs. The figure below lists all the supported OpenStack services. + +[![image15](images/image15.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image15.png) + +Billing is based on created instances. The hourly rate ($0.05/hr) is charged even if the instance is inactive or you never log in to the instance. There is also a small charge for storing custom images or snapshots. + +To summarize, you can quickly provision a POWER8 environment to meet your project needs using OVH RunAbove interfaces as follows: + +- Use “Add Instance” to create a POWER8 virtual machine. 
Customize the Linux image with the desired development environment / packages or workloads + - Upgrade to desired OS level + - Install any applications, packages or files needed to support your project +- Create a snapshot of the POWER8 virtual machine with custom image +- Use “Launch Instance” to create a POWER8 virtual machine using the snapshot of your custom image + - For quick and consistent deployment of desired environment on multiple POWER8 virtual machines +- Delete and re-deploy POWER8 virtual machines as needed to meet your project demands +- Use OpenStack APIs to automate deployment of POWER8 Virtual Machines + +For more information about the OVH POWER8 cloud services and to sign up for the POWER8 lab visit: [http://labs.runabove.com/power8/](http://labs.runabove.com/power8/). diff --git a/content/blog/developer-adoption-summit-europe.md b/content/blog/developer-adoption-summit-europe.md new file mode 100644 index 0000000..fc065da --- /dev/null +++ b/content/blog/developer-adoption-summit-europe.md @@ -0,0 +1,37 @@ +--- +title: "OpenPOWER Foundation Accelerates Developer Adoption at OpenPOWER Summit Europe" +date: "2018-10-03" +categories: + - "blogs" +tags: + - "featured" +--- + +[![](images/Summit-Europe-Banner-e1538569223974.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/10/Summit-Europe-Banner-e1538569223974-1.jpg)More than 250 industry leaders and OpenPOWER Foundation members registered and are convening today at the OpenPOWER Summit Europe 2018 in Amsterdam. The two-day, developer-centric event themed “Open the Future” includes sessions on technologies like PCIe Gen4, CAPI, OpenCAPI, Linux, FPGA, Power Architecture and more. + +Front and center at OpenPOWER Summit Europe is the Talos II developer workstation by Raptor Computing Systems. The first POWER9 developer workstation, the Talos II will enable more developers to begin working on Power technology due to its affordable price point. + +Artem Ikoev, c-founder and CTO of Yadro, one of the OpenPOWER Foundation’s newest Platinum members, will also speak at OpenPOWER Summit Europe. According to Ikoev, “The openness of the OpenPOWER Foundation enables collaboration among industry leaders as well as emerging vendors, resulting in pioneering products.” + +“European interest in OpenPOWER has grown consistently and now comprises close to 25 percent of our membership,” said Hugh Blemings, executive director, OpenPOWER Foundation. “Computing infrastructure, artificial intelligence, security and analytics are all areas where our European members are bringing innovative solutions to the forefront.” + +## OpenPOWER Summit Europe Hackathons + +OpenPOWER Summit Europe attendees will have a chance to participate in two hands-on hackathons. + +The OpenBMC hackathon will provide participants with a complete understanding of the fundamentals of OpenBMC including development, build environment and service management. Planned exercises will cover kernel updates, initial application development, web user interface customization and support system integration. + +The AI4Good hackathon empowers participants to use their coding skills to help others. Teams will compete to build predictive machine learning and deep learning models to help detect the risk of lung tumors. + +## OpenPOWER Growth in Europe + +Representatives from a number of OpenPOWER Foundation member organizations attended Summit Europe 2018 to share how they’re using Power to Open the Future. 
Highlights include: + +- To assist with the monumental task of collecting data generated by its Large Hadron Collider (LHC), **CERN** is evaluating POWER9-based OpenPOWER systems to capture the 5 terabytes of data generated each second by the LHC. POWER9’s industry-leading IO features can help drive differentiated performance for the FPGA cards that CERN uses to capture the data. +- Based on blockchain technology and decentralized networks with democratic oversight, **Vereign** adds integrity, authenticity and privacy to identity, data and collaboration. “Such federated networks of user-controlled clouds require performance, transparency and the ability to add strong hardware-based cryptography. OpenPOWER is the only platform that gives us all three in combination with a vibrant ecosystem of further innovation to further improve our solution,” said Georg Greve, co-founder and president, Vereign AG. +- **Brytlyt** works with companies to solve the challenge of analyzing billions of rows of data at “the speed of thought” by indexing, joining and aggregating data with its GPU database. +- Leveraging OpenPOWER enabled **E4** to build and integrate a chain of components that enable its [D.A.V.I.D.E. supercomputer](https://www.e4company.com/en/?id=press&section=1&page=&new=davide_supercomputer) to achieve increased energy efficiency. +- **Inspur Power Systems** strives to build a new generation of OpenPOWER server products for data center servers facing the “cloud intelligence” era. The company has released three OpenPOWER servers this year including its Enterprise General Platform, Commercial Computing Platform and Facing HPC and AI Platform. +- **Delft University** is working to create next generation OpenPOWER computing systems to achieve the best performance for the target application. In collaboration with IBM, the organization is working to accelerate DNA analysis on FPGAs using CAPI with a goal of creating an end-to-end DNA analysis solution that is easily scalable and delivers high speed. + +As organizations collaborate on new solutions and more developers begin to build on Power, the OpenPOWER Foundation expects continued growth in Europe and around the world. For real time updates from the event, check out [#OpenPOWERSummit](https://twitter.com/hashtag/OpenPOWERSummit?src=hash) on Twitter. diff --git a/content/blog/drc-fpga-interconnect.md b/content/blog/drc-fpga-interconnect.md new file mode 100644 index 0000000..f8062d7 --- /dev/null +++ b/content/blog/drc-fpga-interconnect.md @@ -0,0 +1,30 @@ +--- +title: "New OpenPOWER Member DRC Computing Discusses FPGAs at IBM Interconnect" +date: "2016-02-22" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Roy Graham, President and COO, DRC Computer Corp._ + +New business models bring new opportunities, and my relationship with IBM is proof-positive of that fact. Although I respected them, in the previous way of doing business they were the competition, and it was us or them. Wow, has that changed! In the last year working with IBM I see a very new company and the OpenPOWER organization as a real embodiment of a company wanting to partner and foster complementary technologies. + +![DRC](images/DRC.png) + +DRC Computer (DRC) builds highly accelerated, low latency applications using FPGAs (Field Programmable Gate Arrays). These chips offer massive parallelism at very low power consumption. By building applications that exploit this parallelism we can achieve acceleration factors of 30 to 100+ times the equivalent software version. 
We have built many diverse applications in biometrics, DNA familial search, data security, petascale indexing, and others. At Interconnect 2016 I’ll be highlighting two applications – massive graph network analytics and fuzzy logic based text/data analysis. More details on some of the DRC applications can be found at [here](http://drccomputer.com/solutions.html). + +https://www.youtube.com/watch?v=DZZuur8LXOY + +We are working closely with the CAPI group at IBM to integrate the DRC FPGA-based solutions into Power systems. One of the early results of this cooperation was a demonstration of the DRC graph network analytics at SC15 running on a [POWER8 system using a Xilinx FPGA](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/). + +OpenPOWER provides DRC with a large and rapidly expanding ecosystem that can help us build better solutions faster and offer partnerships that will vastly expand our market reach. The benefit for our customers will be a more fully integrated solution and improved application economics. In **[Session 6395 on Feb 23rd at 4:00pm PT](http://ibm.co/1QcEiUz)** I will be presenting this work with FPGAs at [IBM’s InterConnect Conference](http://ibm.co/1KsWIzQ) in Las Vegas as part of a four-person panel discussing OpenPOWER. + +In the session, I’ll cover the DRC graph networking analytics and fuzzy logic based text/data analysis. The graph networking system implements Dijkstra and Betweenness Centrality algorithms to discover and rank relationships between millions of people, places, events, objects, and more. This achieves in excess of 100x acceleration compared to a software-only version. As a least-cost path and centrality analysis, it has broad applicability in many areas including social networks analysis, distribution route planning, aircraft design, epidemiology, stock trading, etc. The fuzzy logic based text/data analytics was designed for social media analysis, and captures common social media misspellings, shorthand, and mixed language usage. The DRC product is tolerant of these and enables an analyst to do a score based approximate match on phrases or words they are searching for. We can search on hundreds of strings simultaneously on one FPGA achieving acceleration factors of 100x software applications. + +OpenPOWER is opening up whole new uses for FPGAs, and through the collaborative ecosystem, the greatest minds in the industry are working on unlocking the power of accelerators. In an era where performance of systems come not just from the chip, but across the entire system stack, OpenPOWER's new business model is the key to driving innovation and transforming businesses. Please join me at **[session 6395 on Feb 23rd at 4:00pm PT](http://ibm.co/1QcEiUz)**, and I look forward to collaborating with you and our fellow members in the OpenPOWER ecosystem. + +* * * + +_Roy Graham is the President and COO of DRC Computing Corp. and builds profitable revenue streams for emerging technologies including data analytics, communications, servers, human identification systems and hybrid applications. At Digital and Tandem Roy ran Product Management groups delivering > $10B in new revenue. 
He was then SVP of Sales & Marketing at Wyse ($250M turnaround) and at Be (IPO), and CEO at two early-stage web-based companies._ diff --git a/content/blog/e4-computer-engineering-showcases-full-line-of-openpower-hardware-at-international-supercomputing.md b/content/blog/e4-computer-engineering-showcases-full-line-of-openpower-hardware-at-international-supercomputing.md new file mode 100644 index 0000000..ed76305 --- /dev/null +++ b/content/blog/e4-computer-engineering-showcases-full-line-of-openpower-hardware-at-international-supercomputing.md @@ -0,0 +1,32 @@ +--- +title: "E4 Computer Engineering Showcases Full Line of OpenPOWER Hardware at International Supercomputing" +date: "2016-06-20" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Ludovica Delpiano, E4 Computing_ + +E4’s mission, to drive innovation by implementing and integrating cutting-edge solutions with the best performance for every high-end computing and storage requirement, is very much our focus for this year’s edition of ISC. We chose to [showcase](http://cms-it.e4company.com/media/35466/e4pr-accelerated-openpower-system-by-e4-computer-engineering-showcased-a.pdf) a number of systems at our booth, #914, based on one of the most advanced technologies available at the moment: accelerated POWER8 technology. + +## Showcasing OpenPOWER Servers + +\[caption id="attachment\_3933" align="alignleft" width="169"\]![E4 Computer Engineering at ISC](images/20160620_154652-1-169x300.jpg) E4 Computer Engineering at ISC\[/caption\] + +E4’s solutions at ISC16 represent a solid alternative to standard x86 technology, providing scientific and industrial researchers with fast performance for their complex processing applications. + +Our newest system, OP205, is our most advanced POWER8-based server designed for high performance computing and big data. It includes coherent accelerator processor interface (CAPI) enabled PCIe slots, and can host two NVIDIA K80 GPUs. Both technologies are designed to accelerate application performance with the POWER8 CPU. + +## Building Faster Servers with NVLink + +In addition, the OP Series is powered by the [NVIDIA Tesla Accelerated Computing Platform](http://www.nvidia.com/object/why-choose-tesla.html) and two out of the three solutions on display at our booth utilize the new [NVIDIA Tesla P100 GPU accelerators](http://www.nvidia.com/object/tesla-p100.html) with the high-bandwidth NVIDIA NVLink™ interconnect technology, which dramatically speeds up throughput and maximizes application performance. + +We are confident that the series can be a perfect match for complex workloads in Oil & Gas, Finance, Big Data and other compute-intensive applications. + +We look forward to meeting anyone attending the conference who is interested in getting familiar with OpenPOWER. Just pop by booth #914 and our team will talk you through the various options. + +We see ISC as a perfect venue to launch this technology, with the opportunity to talk to the people who may benefit from it and to find out from them which applications and codes are most needed. + +## To learn more, visit us at [www.e4company.com](http://www.e4company.com). 
diff --git a/content/blog/early-application-experiences-summit-oak-ridge.md b/content/blog/early-application-experiences-summit-oak-ridge.md new file mode 100644 index 0000000..fd30c4b --- /dev/null +++ b/content/blog/early-application-experiences-summit-oak-ridge.md @@ -0,0 +1,30 @@ +--- +title: "Early Application Experiences on Summit at Oak Ridge National Laboratory" +date: "2018-12-18" +categories: + - "blogs" +tags: + - "featured" +--- + +By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM + +We recently held the [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/) at the Nimbix headquarters in Dallas, Texas. Having taken place just before SC18, this event allowed members of the Academia Discussion Group and other developers using OpenPOWER platforms to exchange results and enhance their technical knowledge and skills. + +One of the most interesting sessions was led by [Dr. Wayne Joubert](https://www.olcf.ornl.gov/directory/staff-member/wayne-joubert/), computational scientist in the Scientific Computing Group at the National Center for Computational Sciences at Oak Ridge National Laboratory (ORNL). Dr. Joubert shared insight into early application experiences on [Summit](https://www.olcf.ornl.gov/summit/), [the most powerful supercomputer in the world](https://www.top500.org/news/us-regains-top500-crown-with-summit-supercomputer-sierra-grabs-number-three-spot/). + +A number of teams have already started working on Summit in a variety of fields for various applications: + +- **Center for Accelerated Application Readiness (CAAR)** – [this group at ORNL](https://www.olcf.ornl.gov/caar/) is responsible for bringing applications forward to get them ready for next generation systems. So far, 13 CAAR teams have been involved from domains including astrophysics, chemistry, engineering and more. These were the first teams to get access to the first 1,080 Summit nodes (at present, 4,608 nodes are available). +- **Summit Early Science Program** – ORNL received 65 letters of intent and 47 full proposals for its [Summit Early Science Program](https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/summit-early-science-program-call-for-proposals/). Notably, about 20 percent of these included a machine learning component – a remarkable increase in interest for deep learning applications. +- **ACM Gordon Bell Prize** – The Gordon Bell Prize is awarded each year to recognize outstanding achievement in high-performance computing. Five finalist teams used Summit this year including [both winning teams](https://www.hpcwire.com/off-the-wire/acm-awards-2018-gordon-bell-prize-to-two-teams-for-work-combating-opioid-addiction-understanding-climate-change/) – “Attacking the Opioid Epidemic: Determining the Epistatic and Pleiotropic Genetic Architectures for Chronic Pain and Opioid Addiction” and “Exascale Deep Learning for Climate Analytics.” + +Overall, Dr. Joubert shared that, “Summit is a very, very powerful system. Users are already using it effective and we’re really excited about it.” + +View Dr. Joubert’s full session video and slides below. 
+ + + + + +**[Early Application experiences on Summit](//www.slideshare.net/ganesannarayanasamy/early-application-experiences-on-summit "Early Application experiences on Summit ")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)** diff --git a/content/blog/easic-fpga-openpower.md b/content/blog/easic-fpga-openpower.md new file mode 100644 index 0000000..1b09c31 --- /dev/null +++ b/content/blog/easic-fpga-openpower.md @@ -0,0 +1,28 @@ +--- +title: "eASIC Brings Advanced FPGA Technology to OpenPOWER" +date: "2016-05-19" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Anil Godbole, Senior Marketing Manager, eASIC Corp._ + +![easic logo](images/easic-logo.png) [eASIC](http://www.easic.com) is very excited to join the OpenPOWER Foundation. One of the biggest value propositions of the [eASIC Platform](http://www.easic.com/products/) is to offer an FPGA design flow combined with ASIC-like performance and up to 80% lower power consumption. This allows the community to enable custom designed co-processor and accelerator solutions in datacenter applications such as searching, pattern-matching, signal and image processing, data analytics, video/image recognition, etc. + +## **Need for Power-efficient CPU Accelerators** + +The advent of multi-core CPUs/GPUs has helped to increase the performance of modern datacenters. However, this performance is being limited by a non-proportional increase in energy consumption. As workloads like Big Data analytics and Deep Neural Networks continue to evolve in size, there is a need for new computing paradigm which will continue scaling compute performance while keeping power consumption low. + +A key technique is to exploit parallelism during program execution. While multi-core processors can also execute in parallel, they burn a lot of energy when sharing data/messages between processors. That is because such data typically resides in off-chip RAMs and their accesses are very power hungry. + +## **eASIC Platform** + +The eASIC Platform uses distributed logic blocks with associated local memories which enable highly parallel and power efficient implementations of the most complex algorithms. With up to twice the performance of FPGAs and up to 80% lower power consumption the eASIC Platform can provide a highly efficient performance per watt for the most demanding algorithm.  The vast amount of storage provided by the local memories allows fast message and data transfers between the compute elements reducing latency and without incurring the power penalty of accessing off-chip RAM. + +## **CAPI Enhancements** + +CAPI defines a communication protocol for command/data transfers between the main processor and the accelerator device based on shared, coherent memory. Compared to traditional I/O- based protocols, CAPI’s approach precludes the need for O/S calls thereby significantly reducing the latency of program execution. + +Combining the benefits of eASIC Platform and CAPI protocol can lead to high performance and power-efficient Co-processor/Accelerator solutions. For more details on the eASIC Platform please feel free to contact us [www.easic.com](http://www.easic.com) or follow us on Twitter [@eASIC](https://twitter.com/easic). 
diff --git a/content/blog/easic-joins-the-openpower-foundation-to-offer-custom-designed-accelerator-chips.md b/content/blog/easic-joins-the-openpower-foundation-to-offer-custom-designed-accelerator-chips.md new file mode 100644 index 0000000..9f40bdf --- /dev/null +++ b/content/blog/easic-joins-the-openpower-foundation-to-offer-custom-designed-accelerator-chips.md @@ -0,0 +1,9 @@ +--- +title: "eASIC Joins the OpenPOWER Foundation to Offer Custom-designed Accelerator Chips" +date: "2016-05-04" +categories: + - "press-releases" + - "blogs" +--- + + diff --git a/content/blog/ecuador-supercomputing-yachay-openpower.md b/content/blog/ecuador-supercomputing-yachay-openpower.md new file mode 100644 index 0000000..c52f370 --- /dev/null +++ b/content/blog/ecuador-supercomputing-yachay-openpower.md @@ -0,0 +1,28 @@ +--- +title: "Expanding Ecuador’s Supercomputing Future with Yachay and OpenPOWER" +date: "2016-11-22" +categories: + - "blogs" +--- + +_By Alejandra Gando, Director of Communications, Yachay EP_ + +![ibm_yachay1](images/IBM_Yachay1.jpg) + +The pursuit of supercomputing represents a major step forward for Ecuador and Yachay EP, with IBM and OpenPOWER, is leading the way. + +[Yachay](http://www.yachay.gob.ec/yachay-ep-e-ibm-consolidan-acciones-de-alto-desarrollo-tecnologico-para-el-pais/?cm_mc_uid=18522278184214774002079&cm_mc_sid_50200000=1479849333), a planned city for technological innovation and knowledge intensive businesses combining the best ideas, human talent and state-of-the-art infrastructure, is tasked with creating the worldwide scientific applications necessary to achieve Good Living (Buen Vivir). In its constant quest to push Ecuador towards a knowledge-based economy, Yachay found a partner in OpenPOWER member IBM to create a source of information and research on issues such as oil, climate and food genomics. + +Yachay will benefit from state of the art technology, IBM’s new OpenPOWER LC servers infused with innovations developed by the OpenPOWER community, in the search and improvement of production of non-traditional exports based on the rich biodiversity of Ecuador. It will be able to use genomic research to improve the quality of products and become more competitive in the global market. Genomic research revolutionizes both the food industry and medicine. So far the local genomic field had slowly advanced by the amount of data, creating an obstacle to the investigation. + +"For IBM it is of great importance to provide an innovative solution for the country, the region and the world, in order to provide research and allow Ecuador to pioneer in areas such as genomics, environment and new sources of economic growth" says Patricio Espinosa, General Manager, IBM Ecuador. + +Installed in an infrastructure based on the IBM POWER8 servers and storage solutions with software implementation capacity of advanced analytics and cognitive computing infrastructure, this system acquired by Yachay EP enables the use of HPC real-time applications with large data volumes to expand capabilities of scientists to make quantitative predictions. IBM systems use a data-centric approach, integrating and linking data to predictive simulation techniques that expand the limits of scientific knowledge. + +The new supercomputing project will allow Yachay to foster projects with a higher technology component, to create simulations and to do projects with the capacity of impacting the way science is done in the country. 
+ +Héctor Rodríguez, General Manager of the Public Company Yachay, noted with pride the consolidation of an increasingly strong ecosystem for innovation, entrepreneurship and technological development in Ecuador. + +Once the supercomputer is in place the researchers at Yachay will be able to work in projects that require supercomputing enabling better and faster results. By using the power of high performance computing in these analyzes it enables different organizations or companies to maximize their operations and minimize latency of their systems, allowing them to obtain further findings in their research. + +Want to learn more? Visit [www.ciudadyachay.com](http://www.ciudadyachay.com) (available in English and Spanish) follow us on Twitter at @CiudadYachay. diff --git a/content/blog/eict-academy-training.md b/content/blog/eict-academy-training.md new file mode 100644 index 0000000..e049c0a --- /dev/null +++ b/content/blog/eict-academy-training.md @@ -0,0 +1,24 @@ +--- +title: "Get Ready for OpenPOWER: A Technical Training Session with E&ICT Academy in India" +date: "2019-02-15" +categories: + - "blogs" +tags: + - "featured" +--- + +By Ganesan Narayanasamy + +![OpenPOWER and Data Analytics](images/EICT-1024x575.png) + +Professor [R.B.V Subramaanyam, Ph.D.,](https://www.nitw.ac.in/faculty/id/16341/) a computer science professor at the National Institute of Technology, Warangal India, recently organized a six-day faculty development program as part of the [Electronics & ICT Academy](http://eict.iitg.ac.in/). More than 40 faculty members and researchers in Southern India participated in the workshop. + +One full day of the program was dedicated to learning about OpenPOWER. I was happy to take the opportunity to deliver technical sessions on Spark, Spark ML and Internals along with my colleague and IBM Technical lead [Josiah Samuel](https://www.linkedin.com/in/josiahsams/?originalSubdomain=in). + +Josiah covered a Spark overview, Spark SQL, Spark Internals and Spark ML. He conveyed IBM’s involvement in these open source technologies, and discussed features of Power Systems’ capabilities in artificial intelligence and high-powered computing. One key differentiator focused on was incorporating Nvidia GPUs into Power servers along with NVLink connections. + +We shared materials and code with the faculty and researchers after the interactive session, so they can continue to develop their knowledge and skills. Rich technology training sessions like this one offer the opportunity for faculties to learn more about the OpenPOWER stack! 
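For readers who want a feel for the kind of hands-on Spark ML exercise a session like this covers, here is a small, self-contained PySpark sketch (not taken from the workshop materials) that assembles two numeric features and fits a logistic regression model; it runs the same way on an OpenPOWER server as on any other machine with Spark installed.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Start a local Spark session.
spark = SparkSession.builder.appName("spark-ml-demo").getOrCreate()

# Toy dataset: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0), (2.0, 1.0, 1), (2.1, 1.3, 1), (0.1, 1.2, 0)],
    ["f1", "f2", "label"],
)

# Assemble the feature columns into a vector and fit a classifier.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(maxIter=10)
model = Pipeline(stages=[assembler, lr]).fit(df)

model.transform(df).select("f1", "f2", "label", "prediction").show()
spark.stop()
```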
+ + +**[Power Software Development with Apache Spark](//www.slideshare.net/OpenPOWERorg/power-software-development-with-apache-spark "Power Software Development with Apache Spark")** from **[OpenPOWERorg](https://www.slideshare.net/OpenPOWERorg)** diff --git a/content/blog/enabling-coherent-fpga-acceleration.md b/content/blog/enabling-coherent-fpga-acceleration.md new file mode 100644 index 0000000..939523a --- /dev/null +++ b/content/blog/enabling-coherent-fpga-acceleration.md @@ -0,0 +1,30 @@ +--- +title: "Enabling Coherent FPGA Acceleration" +date: "2015-01-16" +categories: + - "blogs" +--- + +**Speaker:** [Allan Cantle](https://www.linkedin.com/profile/view?id=1004910&authType=NAME_SEARCH&authToken=ckHg&locale=en_US&srchid=32272301421438603123&srchindex=1&srchtotal=1&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438603123%2CVSRPtargetId%3A1004910%2CVSRPcmpt%3Aprimary) – President & Founder, Nallatech **Speaker Organization:** ISI / Nallatech + +### Presentation Objective + +To introduce the audience to the Hardware Development Kit (HDK) for IBM’s Coherent Accelerator Processor Interface (CAPI), provided by Nallatech, and to give an overview of FPGA acceleration. + +### Abstract + +Heterogeneous Computing and the use of accelerators are becoming a generally accepted method of delivering efficient application acceleration. However, to date, there has been a lack of coordinated efforts to establish open industry standard methods for attaching and communicating between host processors and the various accelerators that are available today. With IBM’s OpenPOWER Foundation initiative, we now have the opportunity to effectively address this issue and dramatically improve the use and adoption of accelerators. + +The presentation will introduce CAPI, the Coherent Accelerator Processor Interface, to the audience and will detail the CAPI HDK, Hardware Development Kit, implementation that is offered to OpenPOWER customers through Nallatech. Several high level examples will be presented that show where FPGA acceleration brings significant performance gains and how these can often be further advantaged by the coherent CAPI interface. Programming methodologies for the accelerator will also be explored, where customers can either leverage pre-compiled accelerated libraries that run on the accelerator or write their own accelerated functions in OpenCL. + +### Speaker Bio + +Allan is the founder of Nallatech, established in 1993, which specializes in compute acceleration using FPGAs. As CEO, Allan focused Nallatech on helping customers port critical codes to Nallatech’s range of FPGA accelerators and pioneered several early tools that increased porting productivity. In his prior role at BAE Systems, he was heavily involved in architecting real-time, heterogeneous computers that tested live weapon systems and contained many parallel processors, including microprocessors, DSPs and FPGAs. Allan holds a 1st Class Honors EE BEng Degree from Plymouth University and an MSc in Corporate Leadership from Napier University. 
+ +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Cantle_OPFS2015_Nallatech_031315_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/enabling-financial-service-firms-to-compute-heterogeneously-with-gateware-defined-networking-gdn-to-build-order-books-and-trade-with-the-lowest-latency.md b/content/blog/enabling-financial-service-firms-to-compute-heterogeneously-with-gateware-defined-networking-gdn-to-build-order-books-and-trade-with-the-lowest-latency.md new file mode 100644 index 0000000..6001ad9 --- /dev/null +++ b/content/blog/enabling-financial-service-firms-to-compute-heterogeneously-with-gateware-defined-networking-gdn-to-build-order-books-and-trade-with-the-lowest-latency.md @@ -0,0 +1,34 @@ +--- +title: "Enabling financial service firms to compute heterogeneously with Gateware Defined Networking (GDN) to build order books and trade with the lowest latency." +date: "2015-01-16" +categories: + - "blogs" +--- + +### Abstract and Objectives + +Stock, futures, and option exchanges; market makers; hedge funds; and traders require real-time  knowledge of the best bid and ask prices for the instruments that they trade. By monitoring live market data feeds and computing an order book with Field Programmable Gate Array (FPGA) logic, these firms can track the balance of pending orders for equities, futures, and options with sub-microsecond latency. Tracking the open orders by all participants ensures that the market is fair, liquidity is made available, trades are profitable, and jitter is avoided during bursts of market activity. + +Algo-Logic has developed multiple Gateware Defined Networking (GDN) algorithms and components to support ultra-low-latency processing functions in heterogeneous computing systems. In this work, we demonstrate an ultralow latency order book that runs in FPGA logic in an IBM POWER8 server, which includes an ultra-low-latency 10 Gigabit/second Ethernet MAC, a market data feed handler, a fast key/value store for tracking level 3 orders, logic to sort orders, and a standard PSL interface which transfers level 2 market snapshots for multiple trading instruments into shared memory. Algo-Logic implemented all of these algorithms and components in logic on an Altera Stratix V A7 FPGA on a Nallatech CORSA card. Sorted L2 books are transferred over the IBM CAPI bus into cache lines of system memory. By implementing the entire feed processing module and order book in logic, the system enables software on the POWER8 server to directly receive market data snapshots with the least possible theoretical latency and jitter. + +As a member of the Open Power Foundation (OPF), Algo-Logic provides an open Application Programming Interface (API) that allows traders to select which instruments they wish to track and how often they want snapshots to be transferred to memory. These commands, in turn, are transferred across the IBM-provided Power Service Layer (PSL) to the algorithms that run in logic on the FPGA. Thereafter, trading algorithms running in software on any of the 96 hyper-threads in a two-socket POWER8 server can readily access the market data directly from shared memory. 
When combined with a Graphics Processing Unit, a dual-socket POWER8 system optimally leverages the fastest computation from up to 96 CPU threads, high-throughput vector processing from hundreds of GPU cores, and the ultra-low latency from thousands of fine-grain state machines in FPGA logic to implement a truly heterogeneous solution that achieves better performance than could be achieved with homogeneous computation running only in software. + +### Presenter Bio + +John W. Lockwood, CEO of Algo-Logic Systems, Inc., is an expert in building FPGA-accelerated applications. He has founded three companies focused on low latency networking, Internet security, and electronic commerce and has worked at the National Center for Supercomputing Applications (NCSA), AT&T Bell Laboratories, IBM, and Science Applications International Corp (SAIC). As a professor at Stanford University, he managed the NetFPGA program from 2007 to 2009 and grew the Beta program from 10 to 1,021 cards deployed worldwide. As a tenured professor, he created and led the Reconfigurable Network Group within the Applied Research Laboratory at Washington University in St. Louis. He has published over 100 papers and patents on topics related to networking with FPGAs and served as served as principal investigator on dozens of federal and corporate grants. He holds BS, + +MS, PhD degrees in Electrical and Computer Engineering from the University of Illinois at Urbana/Champaign and is a member of IEEE, ACM, and Tau Beta Pi. + +### About Algo-Logic Systems + +Algo-Logic Systems is a recognized leader of Gateware Defined Networking® (GDN) solutions built with Field + +Programmable Gate Array (FPGA) logic. Algo-Logic uses gateware to accelerate datacenter services, lower latency in financial trading networks, and provide deterministic latency for real-time Internet devices. The company has extensive experience building datacenter switches, trading systems, and real-time data processing systems in reprogrammable logic. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Lockwood_John-Algo-Logic_OPFS2015_031715_v4.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/european-market-adopting-openpower-technology-accelerated-pace.md b/content/blog/european-market-adopting-openpower-technology-accelerated-pace.md new file mode 100644 index 0000000..96062b8 --- /dev/null +++ b/content/blog/european-market-adopting-openpower-technology-accelerated-pace.md @@ -0,0 +1,81 @@ +--- +title: "European Market Adopting OpenPOWER Technology at Accelerated Pace" +date: "2016-10-27" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +_Widespread Adoption of OpenPOWER Technology Across Europe for Artificial Intelligence, Deep Learning and World-Advancing Research including the Human Brain Project_ + +_Developer Momentum Continues with European OpenPOWER Developer Cloud, CAPI SNAP Framework Tool, OpenPOWER READY Accelerator Boards and Winners of Developer Challenge Revealed_ + +_European OpenPOWER Community Grows to 60 Members Strong_ + +Barcelona, Spain, October 27, 2016: At the inaugural OpenPOWER European Summit, the OpenPOWER Foundation made a series of announcements today detailing the rapid growth, adoption and support of OpenPOWER across the continent. 
Members announced: + +- a series of European-based OpenPOWER technology implementations advancing corporate innovation and driving important world research including the Human Brain Project; +- a new set of developer resources, including an OpenPOWER developer cloud for European organizations and students; and +- new OpenPOWER-based solutions designed to improve performance for modern, new workloads including artificial intelligence, deep learning, accelerated analytics and high performance computing. + +The [OpenPOWER Foundation](http://www.openpowerfoundation.org/) is a global technology development community with more than 270 members worldwide supporting new product design, development and implementation on top of the high performing, open POWER processor. Many of the OpenPOWER-based technologies developed by OpenPOWER members in Europe are being used to help meet the unique needs of corporations running some of the largest data centers in the world and by researchers exploring high performance computing solutions to help solve some of the world’s greatest challenges. + +“Data growth in virtually every industry is forcing companies and organizations to change the way they consume, innovate around and manage IT infrastructure,” said Calista Redmond, President of the OpenPOWER Foundation. “Commodity platforms are proving ineffective when it comes to ingesting and making sense of the 2.5 billion GBs of data being created daily. With today’s announcements by our European members, the OpenPOWER Foundation expands its reach, bringing open source, high performing, flexible and scalable solutions to organizations worldwide.” + +**New OpenPOWER Deployments and Offerings in Europe** At the Summit, European technology leaders announced important deployments, offerings and research collaborations involving OpenPOWER-based technology. They include: + +- **FRANCE** – GENCI (Grand Equipement National pour le Calcul Intensif), France’s large national research facility for high performance computing, has launched a technology watch collaborative initiative to prepare French scientific communities to the challenges of exascale, and to anticipate novel architectures for future procurements of Tier1 and Tier0 in France. OpenPOWER technology has been identified as one of the leading architectures within this initiative. +- **GERMANY** – In support of the Human Brain Project, a research project funded by the European Commission to advance understanding of the human brain, OpenPOWER members IBM, NVIDIA and the Juelich Supercomputing Centre [delivered a pilot system](https://openpowerfoundation.org/blogs/advancing-human-brain-project-openpower/) as part of the Pre-Commercial Procurement process. Called JURON, the new supercomputer leverages IBM’s new Power S822LC for High Performance Computing system which features unique CPU-to-GPU NVIDIA NVLink technology.  As part of the system installation, the OpenPOWER members delivered to the Human Brain Project a set of key and unique research assets such as Direct Storage Class Memory Access, flexible Platform LSF extensions that allow dynamic job resizing, as well as a port of workhorse Neuroscience codes on the new OpenPOWER-based architecture. +- **SPAIN** – The Barcelona Supercomputing Center (BSC) [announced](https://www.bsc.es/about-bsc/press/bsc-in-the-media/bsc-joins-openpower-foundation) it is using OpenPOWER technology for work underway at the IBM-BSC Deep Learning Center.  
At the joint center, IBM and BSC scientists are developing new algorithms to improve and expand the cognitive capabilities of deep learning systems. +- **TURKEY** – SC3 Electronics, a leading cloud supercomputing center in Turkey, announced the company is creating the largest HPC cluster in the Middle East and North Africa region based on one of IBM’s new OpenPOWER LC servers – the Power S822LC for High Performance Computing – which takes advantage of NVIDIA NVLink technology and the latest NVIDIA GPUs. According to SC3 Executive Vice President Emre Bilgi, this is an important milestone for Turkey's journey into HPC leadership.  Once installed, the cluster will be deployed internally and will also support new cloud services planned to be available by the end of the year. + +These deployments come as OpenPOWER innovations around accelerated computing, storage, and networking via the high-speed interfaces of NVIDIA NVLink and the newly formed open standard OpenCAPI, gain adoption in the datacenter. + +**Developer Momentum** To further support a growing demand for OpenPOWER developer resources in Europe and worldwide, OpenPOWER members announced: + +- **New European developer cloud** – In a significant expansion of developer resources, members of the OpenPOWER Foundation in collaboration with the [Technical University of Munich](http://www.tum.de/) at the [Department of Informatics](http://www.in.tum.de/) announced plans to launch the European arm of the development and research cloud called Supervessel. First launched in China, Supervessel is the cloud platform built on top of POWER’s open architecture and technologies. It aims to provide the open remote access for all the ecosystem developers and university students. With the importance of data sovereignty in Europe, this installment of Supervessel will enable students and developers to innovate applications on the OpenPOWER platform locally, enabling individuals to create new technology while following local data regulations. Supervessel Europe is expected to launch before the end of 2016. +- **CAPI SNAP Framework** – Developed by European and North American based OpenPOWER members IBM, Xilinx, Reconfigure.io, Eideticom, Rackspace, Alpha Data and Nallatech, the [CAPI SNAP Framework](https://openpowerfoundation.org/blogs/openpower-makes-fpga-acceleration-snap/) is available in beta to developers worldwide.  It is designed to make FPGA acceleration technology from the OpenPOWER Foundation easier to implement and more accessible to the developer community. +- **OpenPOWER READY FPGA Accelerator Boards** – Alpha Data, a United Kingdom and North American based leading supplier of high-performance FPGA solutions, [showcased](http://www.alpha-data.com/news.php) a line of low latency, low power, OpenPOWER READY compliant FPGA accelerator boards.  The production-ready PCIe accelerator boards are intended for datacenter applications requiring high-throughput processing and software acceleration. +- **OpenPOWER Developer Challenge Winners** – After evaluating the work of more than 300 developers that participated in the inaugural OpenPOWER Developer Challenge, the OpenPOWER Foundation announced [four Grand Prize winners](https://openpowerfoundation.org/blogs/openpower-developer-challenge-winners/).  
The developers received a collective total of $15,000 in prizes recognizing their OpenPOWER-based development projects including: + - [Emergency Prediction on Spark](http://devpost.com/software/emergencypredictiononspark): Antonio Carlos Furtado from the University of Alberta predicts Seattle emergency call volumes with Deep Learning on OpenPOWER; + - [TensorFlow Cancer Detection](http://devpost.com/software/distributedtensorflow4cancerdetection): Altoros Labs brings a turbo boost to automated cancer detection with OpenPOWER; + - [ArtNet Genre Classifier](http://devpost.com/software/artnet-genre-classifier): Praveen Sridhar and Pranav Sridhar turn OpenPOWER into an art connoisseur; and + - [Scaling Up and Out a Bioinformatics Algorithm](http://devpost.com/software/scaling-up-and-out-a-bioinformatics-algorithm): Delft University of Technology advances precision medicine by scaling up and out on OpenPOWER. + +**Expanded European Ecosystem** Across Europe, technology leaders continue to join the OpenPOWER Foundation, bringing the European roster to a total of 60 members today. Increased membership drives further innovation in areas like acceleration, networking, storage and software all optimized for the OpenPOWER platform. Some of the most recent European members to bring their expertise to the broader OpenPOWER ecosystem in 2016 include: + +- from Belgium – Calyos +- from France – GENCI, Splitted-Desktop Systems +- from Germany – IndependIT Integrative Technologies, LRZ, Paderborn University, Technical University of Munich, ThinkParQ, Thomas-Kren AG +- from Greece – University of Peloponnese +- from The Netherlands – Delft University of Technology, Synerscope +- from Norway – Dolphin Interconnect Solutions +- from Russia – Cognitive Technologies +- from Spain – Barcelona Supercomputing Center +- from Switzerland – Groupe T2i SA, Kolab Systems AG +- from Turkey – SC3 Electronics +- from the United Kingdom – Quru, Reconfigure.io, University of Exeter, University of Oxford + +**About the OpenPOWER Foundation** The OpenPOWER Foundation is a global, open development membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. OpenPOWER members share expertise, investment and server-class intellectual property to develop solutions that serve the evolving needs of technology customers. + +The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation. For further details visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org). + +\# # # + +Media Contact: Crystal Monahan Text100 for OpenPOWER Tel: +1 617.399.4921 Email: [crystal.monahan@text100.com](mailto:crystal.monahan@text100.com) + +**Supporting Quotes from OpenPOWER Foundation European Members** + +**Barcelona Supercomputing Center** "We feel honored to become a member of the OpenPOWER Foundation,” said Mateo Valero, Director of the Barcelona Supercomputing Center. 
“Working closely with the OpenPOWER community will give us the opportunity to collaborate with other leading institutions in high performance architectures, programming models and applications.” + +**Cognitive Technologies** “We see OpenPOWER technology and innovation as key enablers for our Autonomous Driving technology and Neural Network capability,” said Andrey Chernogorov, CEO of Cognitive Technologies, an active driver assistance systems developer. “We believe that our major competitive advantage is the robust artificial intelligence that our system is based on. It makes it possible for the autonomous vehicle control system to firmly operate in bad weather conditions and on bad or damaged roads with no road marking. Since over 70% of the roads in the world can be considered as ‘bad’ we plan to become a global market leader. At the moment our major competitor is the Israeli developer Mobileye.” + +**Jülich Supercomputing Centre** “For a leading provider of computing resources for science, OpenPOWER is an exciting opportunity to create future supercomputing infrastructure and enable new science,” said Dr. Dirk Pleiter, Research Group Leader, Jülich Supercomputing Centre. + +**SC3** “Having seen foremost Internet giants starting up the OpenPOWER Foundation even before the vast wide and deep global hardware (including CPU, GPU, Memory, NVM, Networking, FPGA, ODM's) community, software (OS and Applications) providers and services industry, as well as academic and scientific who's who institutions, become a truly impressive ecosystem, convinced us to join and and contribute to this great organization with high enthusiasm,” said SC3 Executive Vice President Emre Bilgi.   “After a global search of over two years for our supercomputing architecture, we see great opportunities in the OpenPOWER Foundation today and in the future." + +**ThinkParQ** "It is very important for our customers that BeeGFS delivers highest I/O performance and takes full advantage of the latest technologies,” said ThinkParQ CEO Sven Breuner. “The OpenPOWER platform comes with outstanding performance features and has a very promising roadmap, which make it an ideal basis for such demanding applications." + +![openpower_europe_slide-02_02-1](images/OpenPOWER_Europe_Slide-02_02-1.jpg) diff --git a/content/blog/evaluating-julia-for-deep-learning-on-power-systems-nvidia-hardware.md b/content/blog/evaluating-julia-for-deep-learning-on-power-systems-nvidia-hardware.md new file mode 100644 index 0000000..eacdf07 --- /dev/null +++ b/content/blog/evaluating-julia-for-deep-learning-on-power-systems-nvidia-hardware.md @@ -0,0 +1,126 @@ +--- +title: "Evaluating Julia for Deep Learning on Power Systems + NVIDIA Hardware" +date: "2016-11-14" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Deepak Vinchhi, Co-Founder and Chief Operating Officer, Julia Computing, Inc._ + +Deep Learning is now ubiquitous in the machine learning world, with useful applications in a number of areas. In this blog post, we explore the use of Julia for deep learning experiments on Power Systems + NVIDIA hardware. + +We shall demonstrate: + +1. The ease of specifying deep neural network architectures in Julia and visualizing them. We use MXNet.jl, a Julia package for deep learning. +2. The ease of running Julia on Power Systems. We ran all our experiments on a PowerNV 8335-GCA, which has 160 CPU cores, and a Tesla K80 (dual) GPU accelerator. IBM and [OSUOSL](http://osuosl.org/) have generously provided us with the infrastructure for this analysis. 
+ +## **Introduction** + +Deep neural networks have been around since the [1940s](http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/neural4.html), but have only recently been deployed in research and analytics because of strides and improvements in computational horsepower. Neural networks have a wide range of applications in machine learning: vision, speech processing, and even [self-driving cars](https://blogs.nvidia.com/blog/2016/06/10/nyu-nvidia/). An interesting use case for neural networks could be the ability to drive down costs in medical diagnosis. Automated detection of diseases would be of immense help to doctors, especially in places around the world where access to healthcare is limited. + +[Diabetic retinopathy](https://en.wikipedia.org/wiki/Diabetic_retinopathy) is an eye disease brought on by diabetes. There are over 126.2 million people in the world (as of 2010) with diabetic retinopathy, and this is [expected](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3491270/) to rise to over 191.2 million by 2030. According to the WHO in 2006, it [accounted](http://www.who.int/blindness/causes/priority/en/index5.html) for 5% of world blindness. + +Hence, early automatic detection of diabetic retinopathy would be desirable. To that end, we took up an image classification problem using real clinical data. The data was provided to us by [Drishti Care](http://drishticare.org), which is a social enterprise that provides affordable eye care in India. We obtained a number of eye [fundus](https://en.wikipedia.org/wiki/Fundus_(eye)) images from a variety of patients. The eyes affected by retinopathy are generally marked by inflamed veins and cotton spots. The following picture on the left is a normal fundus image whereas the one on the right is affected by diabetic retinopathy. + +![julia-1](images/Julia-1.png) + +## **Setup** + +We built MXNet from source with CUDA and OpenCV. This was essential for training our networks on GPUs with CUDNN, and reading our image record files. We had to build GCC 4.8 from source so that our various libraries could compile and link without error, but once we did, we were set up and ready to start working with the data. + +## **The Hardware: IBM Power Systems** + +We chose to run this experiment on an IBM Power System because, at the time of this writing, we believe it is the best environment available for this sort of work. The Power platform is ideal for deep learning, big data, and machine learning due to its high performance, large caches, 2x-3x higher memory bandwidth, very high I/O bandwidth, and of course, tight integration with GPU accelerators. The parallel multi-threaded Power architecture with high memory and I/O bandwidth is particularly well adapted to ensure that GPUs are used to their fullest potential. + +We’re also encouraged by the industry’s commitment to the platform, especially with regard to AI, noting that NVIDIA made its premier machine learning-focused GPU (the Tesla P100) available on Power well before the x86, and that innovations like NVLink are only available on Power. + +## **The Model** + +The idea is to train a deep neural network to classify all these fundus images into infected and uninfected images. Along with the fundus images, we have at our disposal a number of training labels identifying if the patient is infected or not. + +We used [MXNet.jl](https://github.com/dmlc/MXNet.jl), a powerful Julia package for deep learning. 
This package allows the user to use a high level syntax to easily specify and chain together large neural networks. One can then train these networks on a variety of heterogeneous platforms with multi-GPU acceleration. + +As a first step, it’s good to load a pretrained model which is known to be good at classifying images. So we decided to download and use the [ImageNet model called Inception](https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html) with weights in their 39th epoch. On top of that we specify a simple classifier.

```julia
# Extend model as we wish
arch = mx.@chain mx.get_internals(inception)[:global_pool_output] =>
       mx.Flatten() =>
       mx.FullyConnected(num_hidden = 128) =>
       mx.Activation(act_type=:relu) =>
       mx.FullyConnected(num_hidden = 2) =>
       mx.WSoftmax(name = :softmax)
```

And now we train our model:

```julia
mx.fit(
    model,
    optimizer,
    dp,
    n_epoch = 100,
    eval_data = test_data,
    callbacks = [
        mx.every_n_epoch(save_acc, 1, call_on_0=false),
        mx.do_checkpoint(prefix, save_epoch_0=true),
    ],
    eval_metric = mx.MultiMetric([mx.Accuracy(), WMultiACE(2)])
)
```

One feature of the data is that it is highly [imbalanced](http://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/). For every 200 uninfected images, we have only 3 infected images. One way of approaching that scenario is to penalize the network heavily for every infected case it gets wrong. So we replaced the normal Softmax layer towards the end of the network with a _weighted_ softmax. To check whether we are overfitting, we selected multiple [performance metrics](http://machinelearningmastery.com/classification-accuracy-is-not-enough-more-performance-measures-you-can-use/). + +However, from our [cross-entropy](https://www.wikiwand.com/en/Cross_entropy) measures, we found that we were still overfitting. With fast training times on dual GPUs, we trained our model quickly to understand the drawbacks of our current approach. + +\[caption id="attachment\_4362" align="aligncenter" width="625"\]![Performance Comparison between CPU and GPU on Training](images/julia-2-1024x587.png) Performance Comparison between CPU and GPU on Training\[/caption\] + +Therefore we decided to employ a different approach. + +The second way to deal with our imbalanced dataset is to generate smaller, more balanced datasets that contained roughly equal numbers of uninfected images and infected images. We produced two datasets: one for training and another for cross validation, both of which had the same number of uninfected and infected patients. + +Additionally, we also decided to shuffle our data. Every epoch, we resampled the uninfected images from the larger pool of uninfected images (and they were many in number) in the training dataset to expose the model to a range of uninfected images so that it can generalize well. Then we started doing the same to the infected images. This was quite simple to implement in Julia: we simply had to overload a particular function and modify the data. + +Most of these steps were done incrementally. Our Julia setup and environment made it easy for us to quickly change code and train models and incrementally add more tweaks and modifications to our models as well as our training methods. + +We also augmented our data by adding low levels of Gaussian noise to random images from both the uninfected images and the infected images. Additionally, some images were randomly rotated by 180 degrees. 
Rotations are well suited to this use case because the important spatial features are preserved. This artificially expanded our training set. + +However, we found that while these measures stopped our model from overfitting, we could not obtain adequate performance. We explore the possible reason for this in the subsequent section. + +## **Challenges** + +Since the different approaches we outlined in the previous section were easy to implement within our Julia framework, our experimentation could be done quickly and these various challenges were easy to pinpoint. + +The initial challenge we faced was that our data was imbalanced, so we experimented with penalizing incorrect decisions made by the classifier. We then tried generating a balanced (yet smaller) dataset in the first place, and it turned out that we were overfitting. To counter this, we applied the shuffling and data augmentation techniques. But the model still did not perform well. + +Why is that so? Why is it that a model as deep as Inception wasn’t able to train effectively on our dataset? + +The answer, we believe, lies in the data itself. On a randomized sample from the data, we found that there were two inherent problems with the data: firstly, there are highly blurred images with no features among both healthy and infected retinas. + +![Images such as these make it difficult to extract features](images/Julia-3-300x225.png) _Images such as these make it difficult to extract features_ + +Secondly, there are some features in the healthy images that one might expect to find in the infected images. For instance, in some images the veins are somewhat puffed, and in others there are cotton spots. Below are some examples. While we note that the picture on the left is undoubtedly infected, notice that the one on the right also has a few cotton spots and inflamed veins. So how does one differentiate? More importantly, how does our model differentiate? + +![julia-4](images/Julia-4.png) + +So what do we do about this? For the training set, it would be helpful to have each image, rather than each patient, independently diagnosed as healthy or infected by a doctor, or by two doctors working independently. This would likely improve the model’s predictions. + +## **The Julia Advantage** + +Julia provides a distinct advantage at every stage for scientists engaged in machine learning and deep learning. + +First, Julia is very efficient at preprocessing data. A very important first step in any machine learning experiment is to organize, clean up and preprocess large amounts of data. This was extremely efficient in our Julia environment, which is often orders of magnitude faster than comparable environments such as Python. + +Second, Julia enables elegant code. Our models were chained together using Julia’s flexible syntax. Macros, metaprogramming and syntax familiar to users of any technical environment allow for easy-to-read code. + +Third, Julia facilitates innovation. Since Julia is a first-class technical computing environment, we can easily deploy the models we create without changing any code. Julia hence solves the famous “two-language” problem by obviating the need for different languages for prototyping and production. + +Due to all the aforementioned advantages, we were able to complete these experiments in a very short period of time compared with other technical computing environments.
+ +## **Call for Collaboration** + +We have demonstrated in this blog post how to write an image classifier based on deep neural networks in Julia and how easy it is to perform multiple experiments. Unfortunately, there are challenges with the dataset that require more fine-grained labelling. We have reached out to appropriate experts for assistance in this regard. + +Users who are interested in working with the dataset and possibly collaborating on this with us are invited to reach out via email to [ranjan@juliacomputing.com](mailto:ranjan@juliacomputing.com) to discuss access to the dataset. + +## **Acknowledgements** + +I should thank a number of people for helping me with this work: [Valentin Churavy](https://github.com/vchuravy) and [Pontus Stenetorp](https://github.com/ninjin) for guiding and mentoring me, and [Viral Shah](https://github.com/ViralBShah) of Julia Computing. Thanks to IBM and OSUOSL too for providing the hardware, as well as Drishti Care for providing the data. diff --git a/content/blog/exascale-simulations-of-stellar-explosions-with-flash-on-summit.md b/content/blog/exascale-simulations-of-stellar-explosions-with-flash-on-summit.md new file mode 100644 index 0000000..793c5d8 --- /dev/null +++ b/content/blog/exascale-simulations-of-stellar-explosions-with-flash-on-summit.md @@ -0,0 +1,30 @@ +--- +title: "Exascale Simulations of Stellar Explosions with FLASH on Summit" +date: "2019-01-24" +categories: + - "blogs" +tags: + - "featured" +--- + +_Featuring OpenPOWER Member: [Oak Ridge National Laboratory](https://www.ornl.gov/)_ + +By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM + +At the [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/), developers shared case studies on the work they’re doing using OpenPOWER platforms. One particularly interesting session was led by [James Austin Harris](https://www.olcf.ornl.gov/directory/staff-member/james-harris/), postdoctoral research associate and member of the FLASH Center for Accelerated Application Readiness (CAAR) project at Oak Ridge National Laboratory (ORNL). + +Harris and his group at ORNL study supernovae and their nucleosynthetic products to improve our understanding of the origins of the heavy elements in nature. His session focused on exascale simulations of stellar explosions using FLASH. FLASH is a publicly available, component-based, MPI+OpenMP parallel, adaptive mesh refinement (AMR) code that has been used on a variety of parallel platforms for problems in astrophysics, high-energy-density physics, and more. It’s ideal for studying nucleosynthesis in supernovae due to its multi-physics and AMR capabilities. + +The work is primarily focused on increasing physical fidelity by accelerating the nuclear burning module and associated load balancing. Using [Summit](https://www.olcf.ornl.gov/summit/), [the most powerful supercomputer in the world](https://www.top500.org/news/us-regains-top500-crown-with-summit-supercomputer-sierra-grabs-number-three-spot/), had an enormous impact. + +Summit GPU performance fundamentally changes the potential science impact by enabling large-network (160 or more nuclear species) simulations. Preliminary results indicate that the time for a 160-species run on Summit was roughly equal to that of a 13-species run previously on Titan.
In other words, greater than 100x the computation at an identical cost. + +Overall the CAAR group has had a very positive experience with Summit, and still has more work to do, including exploring hydrodynamics, gravity and radiation transport. + +View Harris’ full session video and slides below. + +https://www.youtube.com/watch?v=5e6IUzl6A6Q + + + +**[Towards Exascale Simulations of Stellar Explosions with FLASH](//www.slideshare.net/ganesannarayanasamy/towards-exascale-simulations-of-stellar-explosions-with-flash "Towards Exascale Simulations of Stellar Explosions with FLASH")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)** diff --git a/content/blog/exploring-the-fundamentals-of-openpower-power9-and-powerai-at-the-university-of-reims.md b/content/blog/exploring-the-fundamentals-of-openpower-power9-and-powerai-at-the-university-of-reims.md new file mode 100644 index 0000000..14bafd9 --- /dev/null +++ b/content/blog/exploring-the-fundamentals-of-openpower-power9-and-powerai-at-the-university-of-reims.md @@ -0,0 +1,71 @@ +--- +title: "Exploring the Fundamentals of OpenPOWER, POWER9 and PowerAI at the University of Reims" +date: "2019-06-25" +categories: + - "blogs" +tags: + - "featured" + - "power9" + - "ibm-power-systems" + - "barcelona-supercomputing-center" + - "powerai" + - "ebv-elektronik" +--- + +By Professor Michaël Krajecki, Université de Reims Champagne-Ardenne + +Last month, the University of Reims hosted a workshop introducing the fundamentals of the OpenPOWER Foundation, POWER9 and PowerAI. Students and faculty from the University were joined by experts from [IBM POWER Systems](https://www.ibm.com/it-infrastructure/power), [EBV Elektronik](https://www.avnet.com/wps/portal/ebv/) and the [Barcelona Supercomputing Center](https://www.bsc.es/) for a great session! + +![](images/Reims.png) + +Multiple topics relating to POWER9, deep learning and PowerAI were discussed. + +- [Thibaud Besson](https://fr.linkedin.com/in/thibaud-besson-3476b42b), IBM Power Systems: **Fundamentals of OpenPOWER Foundation, POWER9 and PowerAI**: Besson discussed why POWER9 is the most state-of-the-art computing architecture developed with AI workloads in mind. He also showcased PowerAI, the software side of the solution, explaining its ease of use and straightforward installation that reduces time to market for implementors. + +- [Franck Maul](https://fr.linkedin.com/in/franck-maul-76bba74), EBV Elektronik: **On Xilinx Offerings**: Maul presented Xilinx products that are going to revolutionize the AI market in the near future, explaining why Xilinx’s offering is the best fit for customers in the current market. He also showed off Xilinx FPGAs, emphasizing their perfect fit with IBM AC922 servers. + +- [Dr. Guillaume Houzeaux](https://www.linkedin.com/in/guillaume-houzeaux-0079b02/?originalSubdomain=es), The Barcelona Supercomputing Center: **How Fluid Dynamics Can Be Implemented on POWER9 and AC922 Servers**: In one of the day’s more technical sessions, BSC examined how a major Spanish car manufacturer has implemented Fluid Dynamics in a cluster of AC922 servers to improve automotive design and to reduce product cost and cycle time. + +- Ander Ochoa Gilo, IBM: **Distributed Deep Learning and Large Model Support**: Ochoa Gilo dove into the benefits of deep learning, showing not only how we can overcommit the memory of the GPUs both in Caffe and Tensorflow, but also how to implement it.
Using live examples, Ochoa Gilo explained how deep learning is accelerated through AC922 servers, allowing users to work with images with up to 10x more resolution versus x86 alternatives. + +He also demonstrated another useful feature of PowerAI, distributed deep learning, which allows a model to be trained on two servers using RDMA connectivity between the memory of the AC922 servers, reducing training time. Finally, Ochoa Gilo showcased the SnapML framework, which allows non-deep-learning models to be accelerated by the GPUs, reducing the training time by 4X. He ran live examples that demonstrated its effectiveness right out of the box – some researchers in the room were so impressed by the framework that they implemented it in their clusters before the demonstration ended! + +- [Thibaud Besson](https://fr.linkedin.com/in/thibaud-besson-3476b42b), IBM POWER Systems: **PowerAI Vision, CAPI and OpenCAPI Interface to FPGA on POWER**: Thibaud Besson returned to explain why PowerAI Vision is a fundamental solution for companies that cannot afford to hire the world’s best data scientists. In a live example, he created a dataset from scratch, ran a training and then put it into production. The resulting model could be monetized in minutes, offering its usefulness to any software that can make a REST API call. + +To wrap up, Besson explained the usefulness of being an open architecture, diving into CAPI and OpenCAPI and the benefits of using them in I/O-intensive workloads. + +AI is a key topic of interest for the University of Reims and its partners, as further projects out of the University explore AI in agriculture and viticulture. As such, participants learned more about OpenPOWER and AI, and speakers in return were able to better understand the needs of our local researchers. All in all, the workshop was well-received and highly engaging. Thank you to everyone who participated! diff --git a/content/blog/exploring-the-power-of-new-possibilities.md b/content/blog/exploring-the-power-of-new-possibilities.md new file mode 100644 index 0000000..fc16d9d --- /dev/null +++ b/content/blog/exploring-the-power-of-new-possibilities.md @@ -0,0 +1,28 @@ +--- +title: "Exploring the Power of New Possibilities" +date: "2019-08-19" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "google" + - "summit" + - "wistron" + - "openpower-foundation" + - "red-hat" + - "inspur" + - "hitachi" + - "yadro" + - "raptor" + - "sierra" + - "infographic" +--- + +By Hugh Blemings, Executive Director, OpenPOWER Foundation + +In the six years since its creation, the OpenPOWER Foundation has facilitated our members combining their unique technologies and expertise, and through this enabled some major breakthroughs in modern computing. With more than 350 members from all around the world and from all layers of the hardware/software stack, together we’re opening doors to a new level of open. + +While we kick off OpenPOWER Summit North America today and look ahead to the next frontier, it’s also important to reflect on all that we’ve accomplished to date. Explore some of the milestones in the infographic below!
+ +![](images/9034_IBMPower_OpenPOWERInfographic_080519.png) diff --git a/content/blog/express-ethernet-technology-solves-for-big-data-variances.md b/content/blog/express-ethernet-technology-solves-for-big-data-variances.md new file mode 100644 index 0000000..7d3f88a --- /dev/null +++ b/content/blog/express-ethernet-technology-solves-for-big-data-variances.md @@ -0,0 +1,41 @@ +--- +title: "Express Ethernet Technology Solves for Big Data Variances" +date: "2019-01-23" +categories: + - "blogs" +tags: + - "featured" +--- + +_Featuring OpenPOWER member: [NEC](https://in.nec.com/)_ + +By: [Deepak Pathania](https://www.linkedin.com/in/deepak-pathania-3aa4a938/), Senior Technical Leader, NEC Technologies India + +I recently had the honor of speaking at the [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/). I spoke alongside more than 40 other developers and researchers on my work with [NEC](https://in.nec.com/). + +My session focused on how, at NEC, we explored solutions to common problems in two types of remote capabilities: ubiquitous computing and IoT solutions. Our solution was to extend the PCIe switch over Ethernet, and in doing so we discovered a new way of looking at connecting multiple PCIe devices remotely. + +**The Problem: Variances of Big Data** + +Accelerators allow for real-time analytics results; however, there is a problem with having an interconnect that ties all of these architectures together, which can result in lower accuracy in values. Another part of this problem is the high demand of Big Data: not only is there high demand for analyzing this data, but results are wanted in real time. + +**The Solution: Express Ethernet Technology** + +Express Ethernet is a PCIe extension over Ethernet, which takes the PCIe slots out of the computer and extends them over Ethernet. This eliminates performance lag, giving the user two capabilities: distance and switching. Distance allows the user to extend the connection over two kilometers, and the switching capability allows for alternating between different types of hardware, all without the need to modify existing hardware or software. + +In summary, the Express Ethernet system enables next-generation computer hardware architectures because it: + +- Allows distance or length with dynamic switching capability +- Provides the same or similar performance for local versus remotely located I/Os +- Moves devices from within the chassis to outside it, with plug-and-play ability +- Makes legacy devices useful and enables cost-effective system realization + +To learn more about Express Ethernet technology and the work being done at NEC, view the full video session and presentation below.
+ +https://www.youtube.com/watch?v=lTaBIhgiNB4 + + + +**[PCI Express switch over Ethernet or Distributed IO Systems for Ubiquitous Computing and IoT Solutions](//www.slideshare.net/ganesannarayanasamy/pci-express-switch-over-ethernet-or-distributed-io-systems-for-ubiquitous-computing-and-iot-solutions "PCI Express switch over Ethernet or Distributed IO Systems for Ubiquitous Computing and IoT Solutions")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)** diff --git a/content/blog/fabric-fpga-cloud.md b/content/blog/fabric-fpga-cloud.md new file mode 100644 index 0000000..1b0928f --- /dev/null +++ b/content/blog/fabric-fpga-cloud.md @@ -0,0 +1,38 @@ +--- +title: "Managing Reconfigurable FPGA Acceleration in a POWER8-based Cloud with FAbRIC" +date: "2016-05-06" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Xiaoyu Ma, PhD Candidate, University of Texas at Austin_ + +_This post is the first in a series profiling the work developers are doing on the OpenPOWER platform. We will be posting more from OpenPOWER developers as we continue our [OpenPOWER Developer Challenge](http://openpower.devpost.com)._ + +![tacc](images/tacc.png) + +FPGAs (Field-Programmable Gate Arrays) are becoming prevalent. Top hardware and software vendors have started making it a standard to incorporate FPGAs into their compute platforms for performance and power benefits. IBM POWER8 delivers CAPI (Coherent Accelerator Processor Interface) to enable FPGA devices to be coherently attached on the PCIe bus. Industries from banking and finance, retail, [healthcare](https://openpowerfoundation.org/blogs/genomics-with-apache-spark/) and many other fields are exploring the benefits of [FPGA-based acceleration](https://openpowerfoundation.org/blogs/capi-drives-business-performance/) on the OpenPOWER platform. + +## FPGAs in the Cloud + +When it comes to cloud compute, in-cloud FPGAs are appealing due to the combined benefits of both FPGAs and clouds. On one hand, FPGAs improve cloud performance and save power by orders of magnitude. On the other hand, the cloud infrastructure reduces cost per compute by resource sharing and large-scale FPGA system access without the user needing to own and manage the system. Furthermore, cloud enables a new level of collaboration, as the identical underlying infrastructure makes it easier for users of the same cloud to share their work, to verify research ideas, and to compare experimental results. + +While clouds with FPGAs are available in companies like IBM, there are, however, few FPGA clouds available for public, especially academic, use. To target this problem, we created [FAbRIC](https://wikis.utexas.edu/display/fabric/Home) (FPGA Research Infrastructure Cloud), a project led by Derek Chiou at The University of Texas at Austin. It enables FPGA research and development on large-scale systems by providing FPGA systems, tools, and servers to run tools in a cloud environment. Currently all FAbRIC clusters are equipped with reconfigurable fabric to run FPGA-accelerated workloads. To be available for open use, FAbRIC systems are placed in the [Texas Advanced Computing Center](https://www.tacc.utexas.edu/systems/fabric) (TACC), the supercomputer center of The University of Texas at Austin. + +![FaBRIC post](images/FaBRIC-post-1.jpg) + +## Using FPGAs with FAbRIC + +The FAbRIC POWER8+CAPI system (Figure A) is a cluster of several x86 servers and nine POWER8 servers.
The x86 nodes serve as the gateway node, the file server and build machines for running FPGA tools. Each POWER8 node is a heterogeneous compute platform equipped with three accelerating devices (Figure b): a Nallatech 385 A7 Stratix V FPGA adapter, an Alpha-data 7V3 Virtex7 Xilinx-based FPGA adapter and a NVIDIA Tesla K40m GPGPU card. FPGA boards are CAPI-enabled to provide coherent shared memory between the processor and accelerators. + +To use FPGA accelerators on POWER8 nodes, the user will design the FPGA accelerator source code typically in RTL such as Verilog or VHDL, push it through the FPGA compiler, program the FPGA with the generated FPGA configuration image and run with host programs. In addition to the conventional RTL design flow which has low programmability, Bluespec System Verilog and High-level Synthesis flows including OpenCL and Xilinx Vivado C-to-Gate are offered as alternatives to RTL in the synthesis of FPGA accelerators. Such flows allow users to abstract away the traditional hardware FPGA development flow for a higher level software development flow and therefore reduce the FPGA accelerator design cycle. + +## Weaving FAbRIC + +After months of work to ensure in-cloud FPGAs are manageable, which we discovered to be nontrivial since opening close to the metal access with reconfigurability creates vulnerabilities, FAbRIC POWER8+CAPI is up and available to the public research community upon request. Our early “family and friend” users have been running real-world applications reliably and generating promising results for their research projects. As another use case of the system, IBM will launch a CAPI design contest in the late spring of 2016. + +* * * + +_About Xiaoyu Ma Xiaoyu Ma is a PHD candidate of the Department of Electrical and Computer Engineering at The University of Texas at Austin. He is advised by Prof. Derek Chiou. His research areas include FPGA-based hardware specialization, hardware design programming models, FPGA cloud infrastructure and microprocessor architecture. He is also an employee of the Large Scale System group at Texas Advanced Computing Center, serving as the lead system administrator for the FPGA Research Infrastructure Cloud (FAbRIC) project._ diff --git a/content/blog/final-draft-of-the-power-isa-eula-released.md b/content/blog/final-draft-of-the-power-isa-eula-released.md new file mode 100644 index 0000000..aa68451 --- /dev/null +++ b/content/blog/final-draft-of-the-power-isa-eula-released.md @@ -0,0 +1,105 @@ +--- +title: "Final Draft of the Power ISA EULA Released" +date: "2020-02-13" +categories: + - "blogs" +tags: + - "ibm" + - "power-isa" + - "microwatt" + - "eula" + - "chiselwatt" + - "end-user-license-agreement" +--- + +**By: Hugh Blemings** + +On August 20, 2019 the OpenPOWER Foundation, along with IBM, announced that the POWER ISA was to be released under an open license. You can read more about it in [previous posts](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) but the short story is that anyone is now free to build their own POWER ISA compliant chips, ASICs, FPGAs etc. without paying a royalty and with a “pass through” patent license from IBM for anything that pertains to the ISA itself.  On top of this of course an ability to contribute to the ISA as well through a Workgroup we’re standing up within the OpenPOWER Foundation. 
+ +Microwatt and Chiselwatt are just two examples of implementations that come under this license and there are rumblings about some others, including credible discussions around SoCs based on the ISA. Exciting times ahead! + +We’ve had some questions about what the actual End User License Agreement (EULA) will look like and we’re pleased to present a final draft of it below.  If you’ve questions or feedback please do get in touch. The details of the associated Workgroup is being finalised by the board, more to follow on that too. :) + +## **FINAL DRAFT - Power ISA End User License Agreement - FINAL DRAFT** + +“Power ISA” means the microprocessor instruction set architecture specification version provided to you with this document. By exercising any right under this End User License Agreement, you (“Recipient”) agree to be bound by the terms and conditions of this Power ISA End User License (“Agreement”). + +All information contained in the Power ISA is subject to change without notice. The products described in the Power ISA are NOT intended for use in applications such as implantation, life support, or other hazardous uses where malfunction could result in death, bodily injury, or catastrophic property damage. + +**Definitions** + +“Architectural Resources” means assignable resources necessary for elements of the Power ISA to interoperate, including, but not limited to: opcodes, special purpose registers, defined registers, reserved bits in existing defined registers, control table fields and bits, and interrupt vectors. + +“Compliancy Subset” means a portion of the Power ISA, defined within the Power ISA, which must be implemented to ensure software compatibility across Power ISA compliant devices. + +“Contribution” means any work of authorship that is intentionally submitted to OPF for inclusion in the Power ISA by the copyright owner or by an individual or entity authorized to submit on behalf of the copyright owner. Without limiting the generality of the preceding sentence, RFCs will be considered Contributions. + +“Custom Extensions” means additions to the Power ISA in a designated subset of Architectural Resources defined by the Power ISA. For clarity, Custom Extensions are not Contributions. + +"Integrated Circuit" shall mean an integral unit including a plurality of active and passive circuit elements formed at least in part of semiconductor material arranged on or in a chip(s) or substrate. + +“OPF” means The OpenPOWER Foundation. + +“Licensed Patent Claims” means patent claims: + +(a) licensable by or through OPF; and + +(b) which, but for this Agreement, would be necessarily infringed by the use of the Power ISA in making, using, or otherwise implementing a Power Compliant Chip. + +“Party(ies)” means Recipient or OPF or both. + +“OpenPOWER ISA Compliance Definition” means the validation procedures associated with architectural compliance developed, delivered, and maintained by OPF as specified in the following link: [https://openpowerfoundation.org/?resource\_lib=openpower-isa-compliance-definition](https://openpowerfoundation.org/?resource_lib=openpower-isa-compliance-definition). + +“Power Compliant” means an implementation of (i) one of the Compliancy Subsets alone or (ii) one of the Compliancy Subsets together with selected permitted combinations of additional instructions and/or facilities within the Power ISA, in the case of clauses (i) and (ii), provided that such implementation meets the corresponding portions of the OpenPOWER ISA Compliance Definition. 
+ +“Power ISA Core” means an implementation of the Power ISA that is represented by software, a hardware description language (HDL), or an Integrated Circuit design, but excluding physically implemented chips  (such as microprocessors, system on a chips, or field-programmable gate arrays FPGAs)); provided that such implementation is primarily designed to be included as part of software, a hardware description language (HDL), or an Integrated Circuit design that are in each case Power Compliant, regardless of whether such implementation, independently, is Power Compliant. + +“Power Compliant Chip” means a Power Compliant physical implementation of one or more Power ISA Cores into one or more Integrated Circuits, including, for example, in a microprocessor, system on a chip, or a field-programmable gate array (FPGA), provided that all portions of such physical implementation are Power Compliant. + +“Request for Change (RFC)” means any request for change in the Power ISA as a whole, or a change in the definition of a Compliancy Subset provided in the Power ISA. + +1. **Grant of Rights** + +Solely for the purposes of developing and expanding the Power ISA and the POWER ecosystem, and subject to the terms of this Agreement: + +1.1 OPF grants to Recipient a nonexclusive, worldwide, perpetual, royalty-free, non-transferable license under all copyrights licensable by OPF and contained in the Power ISA to a) develop technology products compatible with the Power ISA, and b) create, use, reproduce, perform, display, and distribute Power ISA Cores. + +1.2 OPF grants to Recipient the right to license Recipient Power ISA Cores under the Creative Commons Attribution 4.0 license. + +1.3 OPF grants to Recipient the right to sell or license Recipient Power ISA Cores under independent terms that are consistent with the rights and licenses granted under this Agreement. As a condition of the license grant under this section 1.3, the Recipient must either provide the Power ISA with this Agreement to the downstream recipient, or provide notification for the downstream recipient to obtain the Power ISA and this Agreement to have appropriate patent licenses to implement the Power ISA Core as a Power Compliant Chip. It is clarified that no rights are to be granted under this Section 1.3 beyond what is expressly permitted by this Agreement. + +1.4 Notwithstanding Sections 1.1 through 1.3 above, Recipient shall not have the right or license to create, use, reproduce, perform, display, distribute, sell, or license the Power ISA Core in a physically implemented chip (including a microprocessor, system on a chip, or a field-programmable gate array (FPGA)) that is not Power Compliant, nor to license others to do so. + +1.5 OPF grants to Recipient a nonexclusive, worldwide, perpetual, royalty-free, non-transferable license under Licensed Patent Claims to make, use, import, export, sell, offer for sale, and distribute Power Compliant Chips. 
+ +1.6 If Recipient institutes patent litigation or an administrative proceeding (including a cross-claim or counterclaim in a lawsuit, or a United States International Trade Commission proceeding) against OPF, OPF members, or any third party entity (including but not limited to any third party that made a Contribution) alleging infringement of any Recipient patent by any version of the Power ISA, or the implementation thereof in a CPU design, IP core, or chip, then all rights, covenants, and licenses granted by OPF to Recipient under this Agreement shall terminate as of the date such litigation or proceeding is initiated. + +1.7 Without limiting any other rights or remedies of OPF, if Recipient materially breaches the terms of this Agreement, OPF may terminate this Agreement at its discretion. + +2. **Modifications to the Power ISA** + +2.1 Recipient shall have the right to submit Contributions to the Power ISA through a prospectively authorized process by OPF, but shall not implement such Contributions until fully approved through the prospectively authorized OPF process. + +2.2 Recipient may create Custom Extensions as described and permitted in the Power ISA. Recipient is encouraged, but not required, to bring their Custom Extensions through the authorized OPF process for contributions. For clarity, Custom Extensions cannot be guaranteed to be compatible with another third party’s Custom Extensions. + +3. **Ownership** + +3.1 Nothing in this Agreement shall be deemed to transfer to Recipient any ownership interest in any intellectual property of OPF or of any contributor to the Power ISA, including but not limited to any copyrights, trade secrets, know-how, trademarks associated with the Power ISA or any patents, registrations or applications for protection of such intellectual property. + +3.2 Recipient retains ownership of all incremental work done by Recipient to create Power ISA Cores and Power Compliant Chips, subject to the ownership rights of OPF and any contributors to the Power ISA. Nothing in this Agreement shall be deemed to transfer to OPF any ownership interest in any intellectual property of Recipient, including but not limited to any copyrights, trade secrets, know-how, trademarks, patents, registrations or applications for protection of such intellectual property. + +4. **Limitation of Liability** + +4.1 THE POWER ISA AND ANY OTHER INFORMATION CONTAINED IN OR PROVIDED UNDER THIS DOCUMENT ARE PROVIDED ON AN “AS IS” BASIS. OPF makes no representations or warranties, either express or implied, including but not limited to, warranties of merchantability, fitness for a particular purpose, or non-infringement, or that any practice or implementation of the Power ISA or other OPF documentation will not infringe any third party patents, copyrights, trade secrets, or other rights. In no event will OPF or any other person or entity submitting any Contribution to OPF be liable for damages arising directly or indirectly from any use of the Power ISA or any other information contained in or provided under this document. + +5. **Compliance with Law** + +5.1 Recipient shall be responsible for compliance with all applicable laws, regulations and ordinances, and will obtain all necessary permits and authorizations applicable to the future conduct of its business involving the Power ISA. 
Recipient agrees to comply with all applicable international trade laws and regulations such as export controls, embargo/sanctions, antiboycott, and customs laws related to the future conduct of the business involving the Power ISA to be transferred under this Agreement. Recipient warrants that it is knowledgeable with, and will remain in full compliance with, all applicable export controls and embargo/sanctions laws, regulations or rules, orders, and policies, including but not limited to, the U.S. International Traffic in Arms Regulations (“ITAR”), the U.S. Export Administration Regulations (“EAR”), and the regulations of the Office of Foreign Assets Control (“OFAC”), U.S. Department of Treasury. + +6. **Choice of Law** + +6.1 This Agreement is governed by the laws of the State of New York, without regard to the conflict of law provisions thereof. + +7. **Publicity** + +7.1 Nothing contained in these terms shall be construed as conferring any right to use in advertising, publicity or other promotional activities any name, trade name, trademark or other designation of any Party hereto (including any contraction, abbreviation or simulation of any of the foregoing). diff --git a/content/blog/fpga-acceleration-in-a-power8-cloud.md b/content/blog/fpga-acceleration-in-a-power8-cloud.md new file mode 100644 index 0000000..4d1cb99 --- /dev/null +++ b/content/blog/fpga-acceleration-in-a-power8-cloud.md @@ -0,0 +1,26 @@ +--- +title: "FPGA Acceleration in a Power8 Cloud" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Abstract + +OpenStack is one of the most popular software platforms that people use to run a cloud. It manages hardware resources like memory, disks, and x86 and POWER processors, and then provides IaaS to users. Building on existing OpenStack, more kinds of hardware resources can also be managed by OpenStack and provided to users, such as GPUs and FPGAs. FPGAs have been widely used for many kinds of applications, and the POWER8 processor integrates an innovative interface called CAPI (Coherent Accelerator Processor Interface) for direct connection between an FPGA and the POWER8 chip. CAPI not only provides a low-latency, high-bandwidth, cache-coherent interconnection between the user’s accelerator hardware and the application software, but also provides easy programming for both accelerator hardware developers and software developers. Based on such features, we extended OpenStack so that cloud users can remotely use POWER8 machines with FPGA acceleration. + +Our work allows cloud users to upload their accelerator designs to an automatic compilation service; their accelerators are then automatically deployed into a customized OpenStack cloud with POWER8 machines and FPGA cards. When cloud users launch virtual machines (VMs) in this cloud, their accelerators can be attached to their VMs, so that inside these VMs they can use their accelerators for their applications. Like operating system images in the cloud, the accelerators can also be shared or sold across the whole cloud, so that one user’s accelerator can benefit other users. + +By enabling CAPI in the cloud, our work lowers the threshold for using FPGA acceleration, and encourages people to use accelerators for their applications and to share accelerators with all cloud users. The CAPI and FPGA acceleration ecosystem also benefits from this approach. A public cloud running our work is in testing and is being used by some university students. Remote access to the cloud is enabled, so a live demo can be shown during the presentation.
+ +### Bio + +Fei Chen works for IBM China Research Lab in major of cloud and big data. He achieved his B.S. degree in Tsinghua University, China and got his Ph.D. degree in Institute of Computing Technology, Chinese Academy of Sciences in the year 2011. He worked on hardware design for many years, and now focuses on integrating heterogeneous computing resource into cloud. Organization: IBM China Research Lab (CRL) + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Chen-Fei_OPFS2015_IBM_031315_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/genome-folding-with-power8.md b/content/blog/genome-folding-with-power8.md new file mode 100644 index 0000000..be56d76 --- /dev/null +++ b/content/blog/genome-folding-with-power8.md @@ -0,0 +1,38 @@ +--- +title: "Genome Folding and POWER8: Accelerating Insight and Discovery in Medical Research" +date: "2015-11-16" +categories: + - "blogs" +tags: + - "gpu" + - "genomics" + - "healthcare" +--- + +_By Richard Talbot, Director - Big Data, Analytics and Cloud Infrastructure_ + +No doubt, the words “surgery” and “human genome” rarely appear in the same sentence. Yet that’s what a team of research scientists in the Texas Medical Center announced recently --- a new procedure designed to modify how a human genome is arranged in the nucleus of a cell in three dimensions, with extraordinary precision. Picture folding a genome almost as easily as a piece of paper. + +\[caption id="attachment\_2151" align="aligncenter" width="625"\][![An artist’s interpretation of chromatin folded up inside the nucleus. The artist has rendered an extraordinarly long contour into a small area, in two dimensions, by hand. Credit: Mary Ellen Scherl.](images/Artist_Interpretation_4_Credit_MaryEllenScherl-1021x1024.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/11/Artist_Interpretation_4_Credit_MaryEllenScherl.jpg) An artist’s interpretation of chromatin folded up inside the nucleus. The artist has rendered an extraordinarly long contour into a small area, in two dimensions, by hand. Credit: Mary Ellen Scherl.\[/caption\] + +This achievement, which appeared recently in the [Proceedings of the National Academy of Sciences](http://www.pnas.org/), was driven by a team of researchers led by Erez Lieberman Aiden, a geneticist and computer scientist with appointments at the Baylor College of Medicine and Rice University in Houston, and his students Adrian Sanborn and Suhas Rao. The news spread quickly across a broad range of major news sites. Because genome folding is thought to be associated with many life-altering diseases, the implications are profound. Erez said, “This work demonstrates that it is possible to modify how a genome is folded by altering a handful of genetic letters, without disturbing the surrounding DNA.” + +Lurking just beneath the surface, this announcement represents a major computational achievement also. Erez and his team have been using IBM’s new POWER8 scale-out systems packed with NVIDIA Tesla K40 GPU accelerators to build a 3-D visualization of the human genome and model the reaction of the genome to this surgical procedure. + +https://www.youtube.com/watch?v=Tn5qgEqWgW8 + +The total length of the human genome is over 3 billion base pairs (a typical measure of the size of a human or mammalian genome) and the data required to analyze a single person’s genome can easily exceed a terabyte: enough to fill a stack of CDs that is 40 feet tall. 
Thus, the computational requirement behind this achievement is a grand challenge of its own. + +POWER8 memory bandwidth and the high octane computational horsepower of the NVIDIA Tesla Accelerated Computing Platform enabled the team to run applications that aren’t feasible on industry standard systems. Aiden said that the discoveries were possible, in part, because these systems enabled his team to analyze far more 3-D folding data than they could before. + +This high performance cluster of IBM POWER8 systems, codenamed “PowerOmics”, was installed at Rice University in 2014 and made available to Rice faculty, students and collaborative research programs in the Texas Medical Center. The name “PowerOmics” was selected to portray the Life Sciences research mission of this high performance compute and storage resource for the study of large-scale, data-rich life sciences --- such as genomics, proteomics and epigenomics. This high performance research computing infrastructure was made possible by a collaboration with OpenPOWER Foundation members Rice University, IBM, NVIDIA and Mellanox. + +* * * + +For more information: + +- Baylor College of Medicine, Press Release October 19, 2015: [Team at Baylor successfully performs surgery on a human genome, changing how it is folded inside the cell nucleus](https://www.bcm.edu/news/molecular-and-human-genetics/changing-how-human-genome-folds-in-nucleus) +- Rice University, Press Release October 19, 2015: [Gene on-off switch works like backpack strap](http://news.rice.edu/2015/10/19/gene-on-off-switch-works-like-backpack-strap-2/) +- Time, Oct. 19, 2015: [Researcher Perform First Genome Surgery](http://time.com/4078582/surgery-human-genome/) + +* * * diff --git a/content/blog/genomics-with-apache-spark.md b/content/blog/genomics-with-apache-spark.md new file mode 100644 index 0000000..6244ea3 --- /dev/null +++ b/content/blog/genomics-with-apache-spark.md @@ -0,0 +1,47 @@ +--- +title: "Delft University Analyzes Genomics with Apache Spark and OpenPOWER" +date: "2015-12-14" +categories: + - "blogs" +tags: + - "openpower" + - "power8" + - "genomics" + - "spark" +--- + +_By Zaid Al-Ars, Cofounder, Bluebee, Chair of the OpenPOWER Foundation Personalized Medicine Working Group, and Assistant Professor at Delft University of Technology_ + +The collaboration between the Computer Engineering Lab of the Delft University of Technology (TUDelft) and the IBM Austin Research Lab (ARL) started two years ago. Peter Hofstee invited me for a sabbatical visit to ARL to collaborate on big data challenges in the field of genomics and to investigate areas of common interest to work on together. The genomics field poses a number of challenges for high-performance computing systems and requires architectural optimizations to various subsystem components to effectively run the algorithms used in this field. Examples of such required architectural optimizations are: + +- Optimizations to the I/O subsystem, due to the large data file sizes that need to be accessed repetitively +- Optimizations to the memory subsystem, due to the in-memory processing requirements of genomics applications +- Optimizations to the scalability of the algorithms to utilize the available processing capacity of a cluster infrastructure. + +To address these requirements, we set out to implement such genomics algorithms using a scalable big data framework that is capable of performing in-memory computation on a high performance cluster with optimized I/O subsystem. 
+ +\[caption id="attachment\_2183" align="aligncenter" width="625"\][![Frank Liu and Zaid Al-Ars stand next to the ten-node POWER8 cluster running their tests](images/Delft-1-768x1024.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/Delft-1.jpg) Frank Liu and Zaid Al-Ars stand next to the ten-node POWER8 cluster running their tests\[/caption\] + +## Sparking the Innov8 with POWER8 University Challenge + +From this starting point, we had the idea of building a high-performance system for genomics applications and enter it in the [Innov8 with POWER8 University Challenge](http://www-03.ibm.com/systems/power/education/academic/university-challenge.html?cmp=IBMSocial&ct=C3970CMW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us). In the process, the TUDelft would bring together various OpenPOWER technologies developed by IBM, Xilinx, Bluebee and others to create a solution for a computational challenge that has a direct impact in healthcare for cancer diagnostics as well as a scientific impact on genomics research in general. We selected Apache Spark as our big data software stack of choice, due to its scalable in-memory computing capabilities, and the easy integration it offers to a number of big data storage systems and programming APIs. However, a lot of work was needed in order to realize this solution, both related to the practicalities of installing and running Apache Spark on Power systems, something which has not yet been done at the time, as well as building the big data framework for genomics applications. + +The first breakthrough came a couple of months after my sabbatical, when Tom Hubregtsen (a TUDelft student back then, working on his MSc thesis within ARL) was able to setup and run an Apache Spark implementation on a POWER8 system, by modifying and rewriting a whole host of libraries and middleware components in the software stack. Tom worked hard to achieve this important feat as a stepping-stone to his actual work on integrating Flash-based storage into the Spark software stack. He focused on CAPI connected Flash, and modified Apache Spark to spill intermediate data directly to the Flash system. The results were very promising, showing up to 70% reduction in the overhead as a result of the direct data spilling. + +Building on Tom’s work, Hamid Mushtaq (a researcher in the TUDelft) successfully ran Spark on a five-node IBM Power cluster owned by the TUDelft. Hamid then continued to create a Spark-based big data framework that enables segmentation of the large data volumes used in the analysis, and enables transparent distribution of the analysis on a scalable cluster. He also made use of the in-memory computation capabilities of Spark to enable dynamic load balancing across the cluster, depending on the processing requirements of the input files. This enables efficient utilization of the available computation resources in the compute cluster. Results show that we can reduce the compute time of well-known pipelines by more than an order of magnitude, reducing the execution time from hours to minutes. This implementation is now being ported by Frank Liu at ARL on a ten-node POWER8 cluster to check for further scalability and optimization potential. 
+ +\[caption id="attachment\_2184" align="aligncenter" width="625"\][![Left to right: Hamid Mushtaq, Sofia Danko and Daniel Molnar](images/Delft-2-1024x683.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/Delft-2.jpg) Left to right: Hamid Mushtaq, Sofia Danko and Daniel Molnar\[/caption\] + +## FPGA Acceleration + +Keeping in mind the high computational requirements of the various genomics algorithms used, as well as the available parallelism in these algorithms, we identified early on the benefits of using FPGA acceleration approaches to improve the performance even further. However, it is rather challenging to use hardware acceleration in combination with Spark, something that has not yet been shown to work on any system so far, mainly because of the difficulty of integrating FPGAs into the Java-based Spark software stack. Daniel Molnar (an internship student at the TUDelft) took up this challenge and within a short amount of time was able to write native functions that connect Spark through the Java Native Interface (JNI) to FPGA hardware accelerators for specific kernels. These kernels are now being integrated and evaluated for their system requirements and the speedup they can achieve. + +## Improving Genomics Data Compression + +Further improvements to the genomics scalable Spark pipeline are being investigated by Sofia Danko (a TUDelft PhD student), who is looking at the accuracy of the analysis on Power and proposing approaches to ensure high-quality output that can be used in a clinical environment. She is also investigating state-of-the-art genomics data compression techniques to facilitate low-cost storage and transport of DNA information. Initial results of her analysis show that specialized compression techniques can reduce the size of genomics input files to a fraction of the original size, achieving compression ratios as low as 16%. + +We are excited to be part of the Innov8 university challenge. Innov8 helps the students to work as a team with shared objectives, and motivates them to achieve rather ambitious goals that have relevant societal impact they can be proud of. The team is still working to improve the results of the project, by increasing both the performance as well as the quality of the output. We are also looking forward to present our project in the IBM InterConnect 2016 conference, and to compete with other world-class universities participating in the Innov8 university challenge + +* * * + +[![zaid](images/zaid-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/zaid.jpg)_Zaid Al-Ars is cofounder of Bluebee, where he leads the development of the Bluebee genomics solutions. Zaid is also an assistant professor at the Computer Engineering Lab of Delft University of Technology, where he leads the research and education activities of the multi/many-core research theme of the lab. 
Zaid is involved in groundbreaking genomics research projects such as the optimized child cancer diagnostics pipeline with University Medical Center Utrecht and de novo DNA assembly research projects of novel organisms with Leiden University._ diff --git a/content/blog/gnu-compiler-collection-gcc-linux-power.md b/content/blog/gnu-compiler-collection-gcc-linux-power.md new file mode 100644 index 0000000..b5fc3ad --- /dev/null +++ b/content/blog/gnu-compiler-collection-gcc-linux-power.md @@ -0,0 +1,24 @@ +--- +title: "GNU Compiler Collection (GCC) for Linux on Power" +date: "2018-07-12" +categories: + - "blogs" +tags: + - "featured" +--- + +_[This article was originally published by IBM](https://developer.ibm.com/linuxonpower/2018/06/28/gnu-compiler-collection-gcc-linux-power/)_. + +By [Bill Schmidt](https://developer.ibm.com/linuxonpower/author/wschmidt-2/) + +The GNU Compiler Collection (GCC) is the standard set of compilers shipped with all Enterprise Linux distributions. IBM’s Linux on Power Toolchain team supports GCC for Linux on Power, providing enablement and exploitation of new features for each processor generation, and improved code generation for better performance. GCC includes a C compiler (gcc), a C++ compiler (g++), a Fortran compiler (gfortran), a Go compiler (gccgo), and several others. + +Because Linux distributors build all of their packages with the same GCC compilers that they ship, for stability reasons GCC is not updated to new versions over time on enterprise distributions. Thus it is often the case that the default GCC on a system is too old to support all features for the most modern processors. It is highly recommended that you use as recent a version of GCC as possible for compiling production quality code. + +One way to obtain the most recent compilers (and libraries) is to install the [IBM Advance Toolchain](https://developer.ibm.com/linuxonpower/advance-toolchain/). A new version of the Advance Toolchain is released each August, based upon the most recent GCC compilers and core system libraries available. The Advance Toolchain is free to download, and is fully supported through IBM’s Support Line for Linux Offerings. IBM often includes additional optimizations in the Advance Toolchain that were not completed in time for the base release. + +If you are a do-it-yourselfer, you can also download the source for the most recent official GCC releases from the Free Software Foundation’s website. A list of releases, and a link to the mirror sites from which the code can be downloaded, can be found here: [https://gcc.gnu.org/releases.html](https://gcc.gnu.org/releases.html) Instructions for installing the software can be found here: [https://gcc.gnu.org/install/](https://gcc.gnu.org/install/) A sample configuration command for compilers that will generate POWER8 code is available from [GCC for Linux on Power user community](https://developer.ibm.com/linuxonpower/compilers-linux-power/gnu-compiler-collection-gcc/). + +Advice for compiler options for the best performance may be found here: [https://developer.ibm.com/linuxonpower/compiler-options-table/](https://developer.ibm.com/linuxonpower/compiler-options-table/) + +Welcome to the [GCC for Linux on Power user community](https://developer.ibm.com/linuxonpower/compilers-linux-power/gnu-compiler-collection-gcc/)! 
diff --git a/content/blog/google-rackspace-gpus-openpower-summit.md b/content/blog/google-rackspace-gpus-openpower-summit.md new file mode 100644 index 0000000..86ce7ce --- /dev/null +++ b/content/blog/google-rackspace-gpus-openpower-summit.md @@ -0,0 +1,16 @@ +--- +title: "Google, Rackspace, and GPUs: OH MY! See what you missed at OpenPOWER Summit" +date: "2016-04-11" +categories: + - "blogs" +tags: + - "featured" +--- + +What has over 50 new hardware reveals, collaboration from over 200 members like Google, Rackspace, IBM, and NVIDIA, and made headlines around the world? That's right: OpenPOWER Summit 2016! + +Check out our Slideshare below to see some of the great content, quotes from industry leaders, and announcements that you missed. + + + +**[OpenPOWER Summit Day 2 Recap](//www.slideshare.net/OpenPOWERorg/openpower-summit-day-2-recap "OpenPOWER Summit Day 2 Recap")** from **[OpenPOWERorg](//www.slideshare.net/OpenPOWERorg)** diff --git a/content/blog/google-shows-off-hardware-design-using-ibm-chips.md b/content/blog/google-shows-off-hardware-design-using-ibm-chips.md new file mode 100644 index 0000000..2604039 --- /dev/null +++ b/content/blog/google-shows-off-hardware-design-using-ibm-chips.md @@ -0,0 +1,8 @@ +--- +title: "Google Shows Off Hardware Design Using IBM Chips" +date: "2014-04-28" +categories: + - "blogs" +--- + +It’s no secret that IBM wants to move its technology into the kind of data centers that Google GOOGL \-0.47% and other Web giants operate. Now comes evidence that Google is putting some serious work into that possibility. diff --git a/content/blog/high-performance-secondary-analysis-sequencing-data.md b/content/blog/high-performance-secondary-analysis-sequencing-data.md new file mode 100644 index 0000000..0147f98 --- /dev/null +++ b/content/blog/high-performance-secondary-analysis-sequencing-data.md @@ -0,0 +1,61 @@ +--- +title: "High Performance Secondary Analysis of Sequencing Data" +date: "2018-11-13" +categories: + - "blogs" +tags: + - "featured" +--- + +Genomic analysis is on the cusp of revolutionizing the understanding of diseases and the methods for their treatment and prevention. With the advancements in Next Generation Sequencing (NGS) technologies, the number of human genomes sequenced is predicted to double every year. This market growth is further fueled by the ongoing transition of NGS into the clinical market where it is enabling personalized medicine, that promises to transform the diagnosis and treatment of diseases, leading to a disruptive change in modern medicine. + +However, current DNA analysis is restricted to using limited data due to the large time and cost for Whole Genome Sequencing (WGS). As biochemical sequencing is getting faster and cheaper, the bottleneck is the analysis of the large volumes of data generated by these technologies. Faster and cheaper computational processing is required to make genomic analysis available for the masses. Furthermore, pharmaceutical companies, consumer genomic companies, and research centers are currently processing hundreds of thousands of genomes with great cost and will hugely benefit from this improvement as well. + +Parabricks brings high performance computing technologies that are tailored for NGS analyses and accelerates the standard NGS software from several days to approximately one hour. The accelerated software is a drop-in replacement of existing tools that does not sacrifice output accuracy or configurability. 
Parabricks provides 30-36 times faster secondary analysis on POWER9 servers, from the FASTQ files coming out of the sequencer to the variant call files (VCFs) used for tertiary analysis. The standard pipeline shown below consists of three steps and is defined by the Genome Analysis Toolkit (GATK). Parabricks accelerates existing GATK 4 best practices to generate results equivalent to the baseline. The image below shows the pipeline currently supported by Parabricks. + +![](images/Parabricks.png) _Figure 1 - Parabricks GPU accelerated pipeline_ + +## **Power Hardware Configuration** + +The Power System AC922 server is co-designed with OpenPOWER Foundation ecosystem members for the demanding needs of deep learning and AI, high-performance analytics, and high-performance computing users. It is deployed in the most powerful supercomputers on the planet through a partnership between IBM, NVIDIA, and Mellanox, among others. + +The IBM AC922 server is an accelerator-optimized server with support for four NVIDIA Tesla V100 GPUs, each connected to the POWER9 CPUs via NVLink 2.0 at 150 GB/s. The hardware and system software configurations are summarized below. + +
| Server | IBM AC922 (8335-GTH) |
| --- | --- |
| Processor | 40-core IBM POWER9 with NVLink 2.0 technology, 2.4 GHz (3.0 GHz turbo), 4x SMT |
| Memory | 512 GB DDR4 (8 channels), supporting up to 2 TB of memory |
| GPU | 4x NVIDIA V100 16 GB HBM2, SXM2 |
+ + _Table 1 - Hardware configuration_ + + ## **Performance Evaluation** + + Secondary analysis of genomic data on CPUs has been known to take a long time. 30x WGS data can take up to 30-40 hours to run the pipeline shown above using HaplotypeCaller for variant calling. Below are the raw run times in minutes for the Parabricks software on a POWER9 server for 3 DNA samples with different coverages, including NA12878. + +
| Benchmark | Coverage | CPU only (minutes) | BWA-Mem | Others* | HaplotypeCaller | Total Time (minutes) | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| S2 | 25x | 2,746 | 56.8 | 14.65 | 13.2 | 84.5 | 32.4 |
| NA12878 | 43x | 3,125 | 62.7 | 14.1 | 11.5 | 88.3 | 35.39 |
| NIST 12878 | 41x | 2,993 | 61.05 | 14.95 | 13.71 | 89.71 | 33.96 |
+ + _Table 2 - Run times in minutes. *Others includes co-ordinate sorting, marking duplicates, BQSR and ApplyBQSR._ + + ## **Accuracy Evaluation** + + The accuracy of the Parabricks solution is compared to the GATK4 solution at two steps: + + i) the BAM after marking duplicates + + ii) the VCF after calling variants + + Parabricks generates a 100% equivalent BAM compared to the CPU-only solution and has over 99.99% concordance with the CPU VCF. + +
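Table 3 below reports the measured agreement for each benchmark. As a rough illustration of what such a comparison involves, the sketch below counts the variant records (chromosome, position, ref, alt) shared by two call sets; it is a deliberate simplification with placeholder file names, and production comparisons typically rely on dedicated benchmarking tools rather than a script like this.

```python
# Simplified VCF concordance check: fraction of baseline variant records
# reproduced by the accelerated run. File names are placeholders.
def load_variants(path):
    """Collect (chrom, pos, ref, alt) keys from an uncompressed VCF."""
    keys = set()
    with open(path) as vcf:
        for line in vcf:
            if line.startswith("#"):  # skip header lines
                continue
            chrom, pos, _id, ref, alt = line.split("\t")[:5]
            keys.add((chrom, pos, ref, alt))
    return keys

baseline = load_variants("cpu_gatk4.vcf")      # CPU-only GATK4 best-practices output
accelerated = load_variants("parabricks.vcf")  # accelerated pipeline output

shared = baseline & accelerated
concordance = 100.0 * len(shared) / len(baseline)
print(f"Concordance vs. CPU baseline: {concordance:.3f}%")
```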
| Benchmark | Coverage | BAM | VCF |
| --- | --- | --- | --- |
| S2 | 25x | 100% | 99.998% |
| NA12878 | 43x | 100% | 99.996% |
| NIST 12878 | 41x | 100% | 99.996% |
+ +_Table 3_ + +## **Features of Parabricks software** + +- **30-35 times faster analysis:** Compared to a CPU-only solution, Parabricks accelerates secondary analysis by orders of magnitude. +- **100% Deterministic and Reproducible**: Parabricks software regardless of platform and number/type of resources generates the exact same results every execution. +- **Equivalent Results**: Parabricks’ pipeline generates equivalent results as the reference Broad Institute GATK 4 best practices pipeline as the same algorithm is used. +- **Up to Date Support of All Tool Versions**: Parabricks’ accelerated software supports multiple versions of BWA-Mem, Picard and GATK and will support all future versions of these tools. +- **Visualization**: Parabricks generates several key visualizations real-time, while performing secondary analysis that can improve the user’s understanding of the data. +- **Single Node Execution**: The entire pipeline is run using one computing node and does not incur any overhead of distributing data and work across multiple servers. +- **Turnkey Solution**: Parabricks software runs on standard CPU and GPU nodes available on the cloud or on-premise, and requires no additional setup steps by the user. +- **On-Premise and Cloud:** Parabricks software can run on local servers, AWS, Google Cloud, and Azure. + +Please contact [info@parabricks.com](mailto:info@parabricks.com) for further inquiries. diff --git a/content/blog/how-my-daughter-trained-an-artificial-intelligence-model.md b/content/blog/how-my-daughter-trained-an-artificial-intelligence-model.md new file mode 100644 index 0000000..dc0d397 --- /dev/null +++ b/content/blog/how-my-daughter-trained-an-artificial-intelligence-model.md @@ -0,0 +1,54 @@ +--- +title: "How My Daughter Trained an Artificial Intelligence Model" +date: "2019-12-11" +categories: + - "blogs" +tags: + - "ibm" + - "nvidia" + - "artificial-intelligence" + - "ai" + - "power9" + - "ibm-power-systems" + - "powerai" + - "david-spurway" + - "oxford-cancer-biomarkers" +--- + +_\*This article was originally published by David Spurway on LinkedIn.\*_ + +David Spurway, IBM Power Systems CTO, UK & Ireland, IBM + +**OpenPOWER Foundation and PowerAI make AI accessible to all** + +AI is the most buzz-worthy technology today, with [applications ranging](https://www.techworld.com/picture-gallery/tech-innovation/weirdest-uses-of-ai-strange-uses-of-ai-3677707/) from creating TV news anchors to creating new perfumes. At IBM, we have been focused on this topic for a long time. In 1959, we [demonstrated a computer winning at checkers](https://www.ibm.com/ibm/history/ibm100/us/en/icons/ibm700series/impacts/), which was a milestone in AI. The company then built [Deep Blue](https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/) in 1997, a machine that beat the world chess champion. More recently, IBM released [Watson](https://www.ibm.com/watson) - you may have heard of it playing [Jeopardy](https://www.youtube.com/watch?v=P18EdAKuC1U) or [powering The Weather App](https://www.ibm.com/watson-advertising/news/introducing-the-new-weather-channel-app). IBM continues to push the boundaries of AI with [Project Debater](https://www.research.ibm.com/artificial-intelligence/project-debater/), which is the first AI system that can debate with humans on complex topics. 
+ +In fact, after seeing the Watson Grand Challenge in 2011, Google expressed interest in using POWER for their own projects, and [the OpenPOWER Foundation was born](https://www-03.ibm.com/press/us/en/pressrelease/41684.wss). [The Foundation](https://openpowerfoundation.org/) is built around principles of partnership and collaboration, and enables individuals and companies alike to leverage POWER technology for their applications. + +One of our key goals at IBM is to lower the bar of entry to deploying AI. And as the CTO of IBM Power Systems for the UK and Ireland, I’ve witnessed the impact that POWER can have on ecosystems. A few years ago, I decided to try to deploy an AI application on POWER myself. I took inspiration from an OpenPOWER Foundation blog post, [Deep Learning Goes to the Dogs](https://openpowerfoundation.org/deep-learning-goes-to-the-dogs/), and decided to recreate their model to classify different dog breeds on my own IBM Power Systems server. + +I began by using the Stanford Dogs data set, which contains images of 120 breeds of dogs from around the world, and IBM Watson Machine Learning Community Edition (IBM WML CE, formerly known as PowerAI). IBM WML CE was created to simplify the aggregation of hundreds of open source packages necessary to build a single deep learning framework. I used it to make my dog classification work. + +The only problem was that it didn’t work in **all cases**. While my model was good enough to identify dogs in photos that I took of my children at Crufts, it kept tripping up on classifying dachshunds, a favourite of my daughter : + +![The model didn’t know how to classify dachshunds, before Elizabeth fixed it!](https://media.licdn.com/dms/image/C5612AQGzuDG7BBC4bA/article-inline_image-shrink_1000_1488/0?e=1580947200&v=beta&t=V6S5ToENlBXAG9ptru4masv27EHdOIKPslIFd_x3HXU) + +The problem here is that the dachshund was not included in the original 120-breed data set. My model didn’t know what a dachshund was. In order for it to recognise a dachshund, I needed to upload and label dozens of photos of dachshunds, usually in a specific format, which is a lot of work. + +Enter my daughter Elizabeth. + +Elizabeth is a big fan of dogs, and was happy to lend her expertise for the benefit of my project. + +PowerAI Vision makes it easy for someone like my daughter, a subject matter expert, to come in and do this work, instead of requiring it be done by a data scientist. It’s the key to democratising artificial intelligence. + +My daughter channelled her passion for and knowledge of dogs and whipped my model into shape in no time. + +![After my daughter trained the model to recognize dachshunds, using PowerAI Vision.](https://media.licdn.com/dms/image/C5612AQHncaW380qn5g/article-inline_image-shrink_1000_1488/0?e=1580947200&v=beta&t=j96X3kOiprgqaKhWFPia7IIgQYJ3vunxHa251roE7W8) + +“Okay, David,” you might be thinking. “Dogs are a fun topic, but let’s get serious. Why is classifying dachshunds so important to you?” + +Well, the truth is that through the OpenPOWER Foundation and tools like PowerAI, artificial intelligence models can be built for any number of applications. + +In fact, this **exact same** technology is being used in the UK to detect cancers. Predicting which patients with stage II colorectal cancer will suffer a recurrence after surgery is difficult. However, many are routinely prescribed chemotherapy, even though it may cause severe side effects. In some patients these can be fatal. 
[Oxford Cancer Biomarkers](https://oxfordbio.com/) (OCB) was established in 2012 to discover and develop biomarkers (a quantifiable biological parameter that provides insight into a patient’s clinical state) to advance personalized medicine within oncology, focusing on colorectal cancer and its treatments. On a personal note, my father was successfully treated for this cancer. OCB [partnered](https://meridianit.co.uk/ocb-case-study/) with IBM and the IBM Business Partner Meridian to apply PowerAI Vision (using Power Systems AC922 servers, which pair POWER9 CPUs and NVIDIA Tesla V100 with NVLink GPUs) to identify novel diagnostic biomarkers in tumor microenvironments, with the potential to enhance early diagnosis and treatment decisions. + +My daughter can use her expertise to help classify dog breeds - and now there’s no limit to how you can use your own expertise to make the world a better place. diff --git a/content/blog/how-the-ibm-globalfoundries-agreement-supports-openpowers-efforts.md b/content/blog/how-the-ibm-globalfoundries-agreement-supports-openpowers-efforts.md new file mode 100644 index 0000000..926668c --- /dev/null +++ b/content/blog/how-the-ibm-globalfoundries-agreement-supports-openpowers-efforts.md @@ -0,0 +1,18 @@ +--- +title: "How the IBM-GLOBALFOUNDRIES Agreement Supports OpenPOWER's Efforts" +date: "2014-10-22" +categories: + - "blogs" +--- + +By Brad McCredie, President of OpenPOWER and IBM Fellow and Vice President of Power Development + +On Monday IBM and GLOBALFOUNDRIES announced that they had signed a Definitive Agreement under which GLOBALFOUNDRIES plans to acquire IBM's global commercial semiconductor technology business, including intellectual property, world-class technologists and technologies related to IBM Microelectronics, subject to completion of applicable regulatory reviews. From my perspective as both OpenPOWER Foundation President and IBM's Vice President of Power Development, I'd like to share my thoughts with the extended OpenPOWER community on how this Agreement supports our collective efforts. + +This Agreement, once closed, will enhance the growing OpenPOWER ecosystem consisting of both IBM and non-IBM branded POWER-based offerings. While of course our OpenPOWER partners retain an open choice of semiconductor manufacturing partners, IBM's manufacturing base for our products will be built on a much larger capacity fab that should advantage potential customers. + +IBM's sharpened focus on fundamental semiconductor research, advanced design and development will lead to increased innovation that will benefit all OpenPOWER Foundation members. IBM will extend its global semiconductor research and design to advance differentiated systems leadership and innovation for a wide range of products including POWER based OpenPOWER offerings from our members. IBM continues its previously announced $3 billion investment over five years for semiconductor technology research to lead in the next generation of computing. + +IBM remains committed to an extension of the open ecosystem using the POWER architecture; this Agreement does not alter IBM's commitment to the OpenPOWER Foundation. This announcement is consistent with the goals of the OpenPOWER Foundation to enable systems developers to create more powerful, scalable and energy-efficient technology for next-generation data centers. The full stack -- beginning at the chip and moving all the way to middleware software -- will drive systems value in the future. 
IBM and the members of the OpenPOWER Foundation will continue to lead the challenge to extend the promise that Moore’s Law could not fulfill, offering end-to-end systems innovation through our robust collaboration model. + +Today's Agreement reaffirms IBM's commitment to move towards world-class systems -- both those offered by IBM and those built by our OpenPOWER partners that leverage POWER's open architecture -- that can handle the demands of new workloads and the unprecedented amount of data being generated. I look forward to our continued work together, as IBM extends its semiconductor research and design capabilities for open innovation for cloud, mobile, big data analytics, and secure transaction-optimized systems. diff --git a/content/blog/how-ubuntu-is-enabling-openpower-and-innovation-randall-ross-canonical.md b/content/blog/how-ubuntu-is-enabling-openpower-and-innovation-randall-ross-canonical.md new file mode 100644 index 0000000..a30b4c2 --- /dev/null +++ b/content/blog/how-ubuntu-is-enabling-openpower-and-innovation-randall-ross-canonical.md @@ -0,0 +1,36 @@ +--- +title: "How Ubuntu is enabling OpenPOWER and innovation Randall Ross (Canonical)" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Objective + +Geared towards a business audience that has some understanding of POWER and cloud technology, and would like to gain a better understanding of how their combination can provide advantages for tough business challenges. + +### Abstract + +Learn how Canonical's Ubuntu is enabling OpenPOWER solutions and cloud-computing velocity. Ubuntu is powering the majority of cloud deployments. Offerings such as Ubuntu Server, Metal-as-a-service (MAAS), hardware provisioning, orchestration (Juju, Charms, and Charm Bundles), workload provisioning, and OpenStack installation technologies simplify managing and deploying OpenPOWER based solutions in OpenStack, public, private and hybrid clouds. OpenPOWER based systems are designed for scale-out and scale-up cloud and analytics workloads and are poised to become the go-to solution for the world’s (and your businesses’) toughest problems. + +This talk will focus on the key areas of OpenPOWER based solutions, including + +- Strategic POWER8 workloads +- Solution Stacks that benefit immediately from OpenPOWER +- CAPI (Flash, GPU, FPGA and acceleration in general) +- Service Orchestration +- Ubuntu, the OS that fully supports POWER8 +- Large Developer Community and mature development processes +- Ubuntu’s and OpenPOWER’s Low-to-no barrier to entry + +### Speaker names / Titles + +Randall Ross (Canonical’s Ubuntu Community Manager, for OpenPOWER & POWER8) Jeffrey D. Brown (IBM Distinguished Engineer,  Chair of the OpenPOWER Foundation Technical Steering Committee) _\- proposed co-presenter, to be confirmed_ + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Randall-Ross_OPFS2015_Canonical_031715.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/hpc-solution-stack-on-openpower.md b/content/blog/hpc-solution-stack-on-openpower.md new file mode 100644 index 0000000..37eaaf1 --- /dev/null +++ b/content/blog/hpc-solution-stack-on-openpower.md @@ -0,0 +1,46 @@ +--- +title: "HPC solution stack on OpenPOWER" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Introduction to Authors + +Bin Xu: Male, IBM STG China, advisory software engineer, PCM architect, mainly focus on High Performance Computing and Software Define environment. 
+ + Jing Li: Male, IBM STG China, development manager for PCM/PHPC. + +### Background + +OpenPOWER will be one of the major platforms used across many industries, especially in High Performance Computing (HPC). IBM Platform Cluster Manager (PCM) is the most popular cluster management software, aiming to simplify system and workload management in the data center. + +### Challenges + +As OpenPOWER is a brand new platform based on IBM POWER technology, customers are asking whether their end-to-end applications, and even complete solutions, can run well on it. + +Our experience: This demo will show that IBM OpenPOWER can be the foundation of a complete, complex High Performance Computing solution. From HPC cluster deployment, job scheduling, system management and application management to the scientific computing workloads on top of them, all of these components can be constructed on the IBM OpenPOWER platform with good usability and performance. This demo also shows the simplicity of migrating a complete x86-based HPC stack to the OpenPOWER platform.  In this demo, Platform Cluster Manager (PCM) and xCAT will serve as the deployment and management facilitators of the solution, Platform HPC will be the total solution integrated with Platform LSF (Load Sharing Facility), Platform MPI, and other HPC-related middleware, and two popular HPC applications will be demonstrated on this stack. + +[![Abstractimage1](images/Abstractimage1-300x269.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/01/Abstractimage1.jpg) + +There are three steps shown above: + +- Admin installs the head node. +- Admin discovers other nodes and provisions them to join the HPC cluster automatically. +- User runs their HPC applications and monitors the cluster on the dashboard. + +### Benefit + +Faster and easier deployment of an HPC cluster environment based on OpenPOWER technology, along with system and workload management with great usability for OpenPOWER HPC. + +### Next Steps and Recommendations + +Integration with other applications in the OpenPOWER environment. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Jing-Li_OPFS2015_IBM_S5700-HPC-Solution-Stack-on-OpenPOWER.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/ibm-announces-new-open-source-contributions-at-openpower-summit-europe-2019.md b/content/blog/ibm-announces-new-open-source-contributions-at-openpower-summit-europe-2019.md new file mode 100644 index 0000000..b07ca15 --- /dev/null +++ b/content/blog/ibm-announces-new-open-source-contributions-at-openpower-summit-europe-2019.md @@ -0,0 +1,68 @@ +--- +title: "IBM Announces New Open Source Contributions at OpenPOWER Summit Europe 2019" +date: "2020-01-22" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "openpower-foundation" + - "opencapi" + - "power-isa" + - "oc-accel" + - "capi-flashgt" + - "open-source" +--- + +By: Mendy Furmanek, Director, OpenPOWER Processor Enablement, IBM and President, OpenPOWER Foundation + +2019 was an important year for the OpenPOWER Foundation - especially the second half of the year. In the course of a few months, our ecosystem became even more open and the POWER architecture became more accessible to all.
+ +In August, IBM made a major announcement at OpenPOWER Summit North America by [open-sourcing the POWER ISA](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) as well as numerous key hardware reference designs. With these announcements, IBM became the only architecture with a stack that is entirely open system - from the foundation of the processor ISA through the software stack. + +![IBM has a completely open system, from the processor ISA to the software stack.](images/IBM-1.png) + +With exploding amounts of data involved in modern workloads, we believe that open source hardware and an innovative ecosystem is key for the industry. So to lead the industry forward in that direction, we’ve continued to make additional contributions to the open source community. + +Then, I announced two new contributions at OpenPOWER Summit Europe in October, both dealing with CAPI FlashGT and OpenCAPI technology. + +**CAPI FlashGT - Accelerated NVMe Controller FPGA IP** + +![CAPI FlashGT](images/IBM-2.png)CAPI Flash has already been available, but our open-sourcing of the FlashGT component makes the entire CAPI Flash stack completely open. + +Each time an application runs a system call to the operating system, it adds latency - time and overhead in the kernel stack. FlashGT takes a portion of that process and moves it from software to hardware, so much of the kernel instructions and interface is not needed in the software stack. The end result is a faster and more efficient process - lower latency, higher bandwidth. + +With a reduction of instructions running on the CPU / core, there can be a dramatic increase in CPU offload. Initial performance testing shows significant improvements: + +- 6x 4k random read IOPs per core +- 2.5x 4k random write IOPs per core + +More information on [CAPI FlashGT can be found here.](https://github.com/open-power/capi2-flashgt) + +**OpenCAPI Acceleration Framework (OC-Accel)** + +OC-Accel is the Integrated Development Environment (IDE) for creating application FPGA-based accelerators. Put simply, it enables virtual memory sharing among processors and OpenCAPI devices. + +![OpenCAPI Acceleration Framework (OC-Accel)](images/IBM-3.png) + +Numerous layers of logic are needed to create an OpenCAPI device, including physical, data link and transportation layers. These have been available previously. But our open-sourcing of the OC-Accel bridge makes everything needed for an OpenCAPI device available today. + +![OpenCAPI Acceleration Framework (OC-Accel)](images/IBM-4.png) + +OC-Accel includes: + +- Hardware logic to hide the details of TLX protocol +- Software libraries for application code to communicate with +- Scripts and strategies to construct an FPGA project +- Simulation environment +- Workflow for coding, debugging, implementation and deployment +- High level synthesis support +- Examples and documents to get started + +More information on [OC-Accel can be found here](https://github.com/OpenCAPI/oc-accel). + +Now in 2020, we are still at the beginning of our open source journey. When we look at the world today, we know that the only way for the industry to succeed is through open collaboration - a rising tide lifts all boats, as the saying goes. We’re proud to be part of the movement that is enabling the ecosystem to innovate more quickly with our IP and making great strides in computing. Thank you for being a part of the movement with us! + +Please view my full session from OpenPOWER Summit Europe 2019 below. 
+ + diff --git a/content/blog/ibm-hopes-its-enhanced-power8-chip-will-take-on-intels-x86.md b/content/blog/ibm-hopes-its-enhanced-power8-chip-will-take-on-intels-x86.md new file mode 100644 index 0000000..d53d530 --- /dev/null +++ b/content/blog/ibm-hopes-its-enhanced-power8-chip-will-take-on-intels-x86.md @@ -0,0 +1,10 @@ +--- +title: "IBM hopes its enhanced Power8 chip will take on Intel’s x86" +date: "2014-06-27" +categories: + - "press-releases" + - "industry-coverage" + - "blogs" +--- + +BANGALORE, JUNE 27: IBM will use its huge India software developer base to work on its new Power8 chips to challenge Intel’s dominance.The world’s largest software company has launched its new Power8 chip architecture, an enhancement over its earlier version, to take on Intel’s Xeon chips or x86 widely used in data centres and server computers worldwide. diff --git a/content/blog/ibm-is-changing-the-server-game.md b/content/blog/ibm-is-changing-the-server-game.md new file mode 100644 index 0000000..ec5030f --- /dev/null +++ b/content/blog/ibm-is-changing-the-server-game.md @@ -0,0 +1,8 @@ +--- +title: "IBM is changing the server game" +date: "2014-04-30" +categories: + - "blogs" +--- + +There was something I missed on the IBM strategy when they sold x86 branch to Lenovo. Since I read some articles about OpenPower and Google home made first power8 server, this strategy is making more sense. diff --git a/content/blog/ibm-portal-openpower.md b/content/blog/ibm-portal-openpower.md new file mode 100644 index 0000000..9e7b98f --- /dev/null +++ b/content/blog/ibm-portal-openpower.md @@ -0,0 +1,16 @@ +--- +title: "IBM Portal for OpenPOWER launched for POWER series documentation, system tools and development collaboration" +date: "2017-03-30" +categories: + - "blogs" +--- + +_By Andy Pearcy-Blowers, OpenPOWER Applications Engineer and IBM Portal for OpenPOWER Co-Lead & Luis Armenta, Sr. SI Engineer, Project Manager and IBM Portal for OpenPOWER Lead_ + +This week, OpenPOWER member IBM launched its new website, the "[IBM Portal for OpenPOWER](https://www.ibm.com/systems/power/openpower)". The IBM Portal for OpenPOWER was developed to provide a central location for documentation on Power Systems servers. The IBM Portal for OpenPOWER gives users the ability to quickly find material of interest, including but not limited to: Users’ Manuals, Datasheets, Reference Design documentation, Firmware Training, and more, to foster innovation in developing around POWER. + +This new portal replaces IBM Customer Connect's OpenPOWER Connect space that OpenPOWER Members and other OpenPOWER interested parties may have used in the past. + +Throughout 2017 additional functionality and applications will be deployed to the IBM Portal for OpenPOWER. Examples of functionality improvements include enhancements to the: search function, social tools, documentation repository and subscription tools.  Examples of application implementations include a new Collaboration Center, System Tools, Issues Management and more.   The Collaboration Center will provide OpenPOWER partners, during development with IBM, the ability to: securely share files, screen share, track milestones and more.  The System Tools application will provide entitled OpenPOWER partners the ability to: download tools like HTX, Cronus, HSSCDR & more to use while developing and verifying their system design.  The Issues Management application will allow any user the ability to submit questions, issues and requests for support to IBM. 
+ +To visit the site and start developing around POWER go to: [www.ibm.com/systems/power/openpower](https://www.ibm.com/systems/power/openpower)​. diff --git a/content/blog/ibm-power8-outperforms-x86-on-financial-benchmarks.md b/content/blog/ibm-power8-outperforms-x86-on-financial-benchmarks.md new file mode 100644 index 0000000..9e11efb --- /dev/null +++ b/content/blog/ibm-power8-outperforms-x86-on-financial-benchmarks.md @@ -0,0 +1,9 @@ +--- +title: "IBM Power8 Outperforms x86 on Financial Benchmarks" +date: "2015-06-10" +categories: + - "press-releases" + - "blogs" +--- + + diff --git a/content/blog/ibm-sheeltron-ai-hipc.md b/content/blog/ibm-sheeltron-ai-hipc.md new file mode 100644 index 0000000..5c8ed46 --- /dev/null +++ b/content/blog/ibm-sheeltron-ai-hipc.md @@ -0,0 +1,26 @@ +--- +title: "IBM and Sheeltron Digital Systems Showcase AI Solutions at HiPC 2018" +date: "2019-01-17" +categories: + - "blogs" +tags: + - "featured" +--- + +By [Ranganath V](https://www.linkedin.com/in/ranganath-v-b944a050/)., Vice President, Sheeltron Digital Systems Private Limited + +[![](images/HiPC-1.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2019/01/HiPC-1.png) + +  + +[HiPC 2018](https://hipc.org/) – the 25th edition of the IEEE International Conference on High Performance Computing, Data and Analytics – was held in December, 2018 in India. The conference served as a forum for researchers from around the world to present work, and to highlight high performance computing activities in Asia. The meeting focused on all aspects of high performance computing systems and their scientific, engineering and commercial applications. + +IBM participated in the conference and showcased artificial intelligence solutions on IBM systems running Power9 processors with Nvidia GPU cards. IBM also supported Sheeltron Digital Systems Pvt. Ltd. with including a booth in the HiPC Showcase. + +IBM’s cloud instance showcased artificial intelligence solutions like vehicle driving automation and machine learning. It was a great opportunity to explain the advantages of IBM’s Power9 processor architecture with its high memory bandwidth, CAPI and NVLink, as well as high processor clock speeds and higher cache memory. + +[![](images/HiPC-2-1024x802.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2019/01/HiPC-2.png) + +Throughout the conference, many researchers in attendance were exposed for the first time to the OpenPOWER Foundation and its objective to create an open ecosystem using the Power architecture. + +[![](images/HiPC-3.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2019/01/HiPC-3.png) diff --git a/content/blog/ibm-xilinx-collaborating-pcie-openpower.md b/content/blog/ibm-xilinx-collaborating-pcie-openpower.md new file mode 100644 index 0000000..aaf32a1 --- /dev/null +++ b/content/blog/ibm-xilinx-collaborating-pcie-openpower.md @@ -0,0 +1,46 @@ +--- +title: "My Generation: Open Standards Help Members Push Next PCIe Milestone Forward" +date: "2017-05-19" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "xilinx" + - "openpower-foundation" + - "pcie" + - "pci-e" + - "pci-express" + - "power-architecture" + - "collaboration" + - "innovation" +--- + +_By Jeff A. 
Stuecheli, Power Systems Architect, IBM Cognitive Systems_ + +Earlier this week, another validation of the OpenPOWER Foundation’s model of collaborative innovation was announced when members [Xilinx and IBM revealed that they are working together to maximize the potential of the next generation of the PCIe interface, Gen4](https://www.xilinx.com/news/press/2017/xilinx-and-ibm-first-to-double-interconnect-performance-for-accelerated-cloud-computing-with-new-pci-express-standard.html), on OpenPOWER. In a statement, Xilinx wrote: + +> _“Together with IBM, the two companies are first to double interconnect performance between an accelerator and CPU through the use of PCI Express Gen4 compared to the existing widely-deployed PCI Express Gen3 standard. Gen4 doubles the bandwidth between CPUs and accelerators to 16 Gbps per lane, thereby accelerating performance in demanding data center applications such as artificial intelligence and data analytics.”_ + +\[caption id="attachment\_4795" align="aligncenter" width="625"\][![OpenPOWER Foundation members IBM and Xilinx are working together to maximize the next generation of the PCIe interface, Gen4, on OpenPOWER.](images/IBM-_-XILINX-1024x477.png)](https://openpowerfoundation.org/wp-content/uploads/2017/05/IBM-_-XILINX.png) OpenPOWER Foundation members IBM and Xilinx are working together to maximize the next generation of the PCIe interface, Gen4, on OpenPOWER.\[/caption\] + +Former OpenPOWER Foundation President and IBM VP and Fellow Brad McCredie added, “This leadership in PCI Express is another reason that POWER architecture is being deployed in modern data centers. IBM is excited to leverage the underlying performance of PCIe Express Gen4 for CAPI 2.0 which eases the programming experience for application developers.” + +## Collaborative Innovation Strikes Again + +While other vendors decided to move to proprietary standards on proprietary platforms, IBM, Xilinx, Mellanox Technologies and other companies realized early on that by working together and pooling their collective expertise, they could be pioneers for the next generation of PCIe + +And collaboration drives innovation. A key covenant of the OpenPOWER strategy is the aggressive adoption of industry leading open interface standards, to facilitate the integration of best of breed silicon technology. + +"We believe in open standards," said Ivo Bolsens, CTO at Xilinx. "It's gratifying to see this milestone between our companies that will alleviate significant performance bottlenecks in accelerated computing, particularly for data center computing." + +“Collaborations between companies have and will enable the best of breed technologies and solutions in the market, that enable the highest data center return on investment”, said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Mellanox was the first to enable PCIe Gen4 network adapters, which can connect to PCIe Gen4 CPUs, FPGAs and more, accelerating data throughput, resulting in higher applications performance, efficiency and scalability.” + +Beyond the raw speed of PCIe Gen 4, the next generation coherent accelerator technology (CAPI 2.0) is provided between the POWER9 and Xilinx FPGAs.  This industry leading protocol enables the efficient integration of FPGA based accelerators and POWER9 CPUs, with greatly reduced communication overheads. Already the potential of PCIe Gen4 on the OpenPOWER platform is opening up intriguing use cases. 
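To put the doubling in concrete terms, a quick back-of-the-envelope calculation compares the per-direction bandwidth of a 16-lane link at Gen3 versus Gen4 signalling rates (8 GT/s vs. 16 GT/s with 128b/130b encoding). The figures below are theoretical peaks for illustration, not measured OpenPOWER results.

```python
# Back-of-the-envelope PCIe bandwidth for an x16 link (theoretical peak, one direction).
def pcie_x16_gbytes_per_s(gigatransfers_per_s, lanes=16, encoding=128 / 130):
    # Each transfer carries one bit per lane; 128b/130b encoding applies to Gen3 and Gen4.
    bits_per_s = gigatransfers_per_s * 1e9 * lanes * encoding
    return bits_per_s / 8 / 1e9  # convert to GB/s

gen3 = pcie_x16_gbytes_per_s(8.0)    # PCIe Gen3: 8 GT/s per lane
gen4 = pcie_x16_gbytes_per_s(16.0)   # PCIe Gen4: 16 GT/s per lane
print(f"Gen3 x16: {gen3:.1f} GB/s, Gen4 x16: {gen4:.1f} GB/s")  # ~15.8 vs ~31.5 GB/s
```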
You can see how people are already using Xilinx FPGAs on OpenPOWER with CAPI in our CAPI series: + +- [Pt 1: Accelerating Business Applications in the Data-Driven Enterprise with CAPI](https://openpowerfoundation.org/blogs/capi-drives-business-performance/) +- [Pt 2: Using CAPI and Flash for larger, faster NoSQL and analytics](https://openpowerfoundation.org/blogs/capi-and-flash-for-larger-faster-nosql-and-analytics/) +- [Pt 3: Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI](https://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/) +- [Pt 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/) + +In the data centric world of cognitive computing, the ability to drive high communication rates between high performance devices is paramount. As PCIe Gen4 on CAPI 2.0 doubles that rate, systems can now transfer data in half the time, relieving a key bottleneck. Let us know in the comments below how you plan to utilize PCIe Gen4 on OpenPOWER! diff --git a/content/blog/ibmopenpower-success-2017-volume-sales.md b/content/blog/ibmopenpower-success-2017-volume-sales.md new file mode 100644 index 0000000..13bba66 --- /dev/null +++ b/content/blog/ibmopenpower-success-2017-volume-sales.md @@ -0,0 +1,11 @@ +--- +title: "For IBM/OpenPOWER: Success in 2017 = (Volume) Sales" +date: "2017-01-17" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + + diff --git a/content/blog/ibms-first-openpower-server-targets-hpc-workloads.md b/content/blog/ibms-first-openpower-server-targets-hpc-workloads.md new file mode 100644 index 0000000..076146b --- /dev/null +++ b/content/blog/ibms-first-openpower-server-targets-hpc-workloads.md @@ -0,0 +1,49 @@ +--- +title: "IBM’s First OpenPOWER Server Targets HPC Workloads" +date: "2015-03-20" +categories: + - "press-releases" + - "blogs" +--- + +The first annual OpenPOWER Summit, held this week in San Jose, Calif., in tandem with NVIDIA’s GPU Technology Conference (GTC), launched a raft of hardware and other announcements intended to cede market share from Intel. On Wednesday, foundation members showed off more than a dozen hardware solutions, an assortment of systems, boards, and cards, and even a new microprocessor customized for China. Much of the gear is targeted at hyperscale datacenters, where x86 reigns supreme, but there was something for the HPC space too. + +Included in the lot proudly displayed at the front of the packed conference room were the world’s first non-IBM branded OpenPOWER commercial server from Tyan; a prototype of Firestone, IBM’s first OpenPOWER server and great exascale hope; and the first GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950. Yes, that’s a lot of firsts, and it’s a pretty impressive lineup for an organization that is just entering its second year. + +Incorporated in December 2013 by IBM, NVIDIA, Mellanox, Google and Tyan, the foundation has expanded to more than 110 businesses, organizations and individuals spanning 22 countries. Innovations being pursued by OpenPOWER members include custom systems for large or warehouse scale datacenters, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for SW appliances, and advanced hardware technology exploitation. 
+ + The OpenPOWER architecture includes SOC design, bus specifications, reference designs, as well as open source firmware, operating system, and server virtualization hypervisor (POWER8 variant of KVM). Little Endian Linux is being used to facilitate software migration to POWER. Such features were covered during the Wednesday keynotes and discussed inside the numerous OpenPOWER-themed sessions. + + The first HPC-oriented OpenPOWER play is the IBM Power8 server, codenamed Firestone. Due out later this year, the server is manufactured by Taiwan’s Wistron, sold by IBM, and combines the technologies of NVIDIA and Mellanox. + + “The prototype of IBM’s system revealed today is the first in a series of new high-density Tesla GPU-accelerated servers for OpenPOWER high-performance computing and data analytics,” commented Sumit Gupta, general manager of accelerated computing at NVIDIA. + + Firestone already has a lot riding on it. Speaking on Wednesday of the company’s technical computing roadmap, Brad McCredie, president of OpenPOWER and an IBM fellow, said that IBM provided the U.S. Department of Energy with a Firestone motherboard to support development of the CORAL machines, Summit and Sierra. The 2017-era supercomputers are expected to be five to 10 times faster than their predecessors and will use POWER9 chips in tandem with multiple NVIDIA Tesla Volta GPUs. + + John Ashley, senior IBM software developer relations manager at NVIDIA, spoke about his company’s role in the collaboration. + + “One of the things we really believe in is heterogeneous computing,” he said. “The GPU is not the best processor for every task. There are things GPUs aren’t good at. It happens to be the case that those things we \[at NVIDIA\] are not so good at, POWER processors are great at. The POWER processor is among the fastest, most capable serial processors on the planet and it’s only getting better, so being able to put those two things together seems like a really natural fit and so that’s a big part of why this event is here at GTC.” + + [![Cabot OpenPOWER Intertwined Technology Trends data-centric](images/Cabot-OpenPOWER-Intertwined-Technology-Trends-data-centric.png)](http://6lli539m39y3hpkelqsm3c2fg.wpengine.netdna-cdn.com/wp-content/uploads/2015/03/Cabot-OpenPOWER-Intertwined-Technology-Trends-data-centric.png)Hyperscale and cloud datacenters may be the most prominent target for OpenPOWER products but IBM wants it known that there is much value here for high performance computing (HPC) clients as well. A [position paper](http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=WH&infotype=SA&appname=STGE_PO_PO_USEN&htmlfid=POL03229USEN&attachment=POL03229USEN.PDF#loaded&cmp=ibmsocial&ct=stg&cr=sc&cm=h&ccy=us) undertaken by Srini Chari, Ph.D., MBA, managing partner of Cabot Partners, at the behest of IBM, titled Crossing the Performance Chasm with OpenPOWER, looks at the performance resulting from OpenPOWER-designed IBM POWER8 solutions versus x86 platforms with Intel chips. Both traditional HPC and newer data intensive analytics workflows are examined. + + Called into question are compute-intensive benchmarks, like LINPACK, which are deemed to be increasingly inadequate for guiding HPC purchasing decisions. 
“It is clear,” states the author, “that the performance of most practical HPC applications also depend on memory, I/O and network and not exclusively on Flops/core and the number of cores.” + +The figure below illustrates early results of standard benchmarks that are good indicators of data-centric HPC performance: + +[![OpenPOWER POWER8 position paper data-centric fig8](images/OpenPOWER-POWER8-position-paper-data-centric-fig8.png)](http://6lli539m39y3hpkelqsm3c2fg.wpengine.netdna-cdn.com/wp-content/uploads/2015/03/OpenPOWER-POWER8-position-paper-data-centric-fig8.png) + +This figure summarizes some well-known application benchmarks across various sectors: + +[![OpenPOWER POWER8 position paper fig9](images/OpenPOWER-POWER8-position-paper-fig9.png)](http://6lli539m39y3hpkelqsm3c2fg.wpengine.netdna-cdn.com/wp-content/uploads/2015/03/OpenPOWER-POWER8-position-paper-fig9.png) + +What makes these kinds of performance enhancements possible? When it comes to HPC workloads, here is a list of the most significant features of IBM Power systems based on POWER8, according to the report: + +1. Massive Threads: Each POWER8 core is capable of handling eight hardware threads simultaneously for a total of 96 threads executed simultaneously on a 12-core chip. +2. Large Memory Bandwidth: Very large amounts of on- and off-chip eDRAM caches and on-chip memory controllers enable very high bandwidth to memory and system I/O. +3. High performance processor: POWER8 is capable of clock speeds around 4.15GHz, with a Thermal Design Power (TDP) in the neighborhood of 250 watts. +4. Excellent RAS: Many studies (e.g., [here](http://public.dhe.ibm.com/common/ssi/ecm/en/pol03161usen/POL03161USEN.PDF) and [here](http://www.ibm.com/systems/power/solutions/assets/bigdata-analytics.html)) across a range of enterprises have indicated that IBM Power Systems perform better than x86 systems in Reliability, Availability and Serviceability (RAS), performance, TCO, security and overall satisfaction. + +5. Coherent Accelerator Processor Interface (CAPI): CAPI, a direct link into the CPU, allows peripherals and coprocessors to communicate directly with the CPU, substantially bypassing operating system and driver overheads. IBM has developed CAPI to be open to third party vendors and even offers design enablement kits. In the case of flash memory attached via CAPI, the overhead is reduced by a factor of 24:1. More importantly though, CAPI can be used to attach accelerators like FPGAs — directly to the POWER8 CPU for significant workload-specific performance boosts. +6. Open partner ecosystem with the OpenPOWER Foundation. + +[![GTC15 POWER8 processor](images/GTC15-POWER8-processor.png)](http://6lli539m39y3hpkelqsm3c2fg.wpengine.netdna-cdn.com/wp-content/uploads/2015/03/GTC15-POWER8-processor.png) diff --git a/content/blog/iit-bombay-openpower-meetup.md b/content/blog/iit-bombay-openpower-meetup.md new file mode 100644 index 0000000..0c80300 --- /dev/null +++ b/content/blog/iit-bombay-openpower-meetup.md @@ -0,0 +1,16 @@ +--- +title: "IIT Bombay OpenPOWER Meetup" +date: "2017-04-10" +categories: + - "blogs" +tags: + - "featured" +--- + +About 25 participants came to attend the OpenPOWER Meetup in IIT Bombay. We had the following groups: IITB (4 groups), VIT Pune one group (working on Deep learning on GPUs), RAIT Navi Mumbai one group (working on fractional calculus), DJ Sanghvi Mumbai One group (working on embedded supercomputing and image processing). 
+ +The IITB groups presented their work on CAPI, Remote satellite image processing and HPC, Cryptography and HPC, Global Optimization and Robust control (HPC), Deep learning (using Caffe) for Jet engine modelling, etc. + +Visit http://www.oprfiitb.in/ to get access to the OpenPOWER research cluster for your research activities. + +[https://www.linkedin.com/pulse/iit-bombay-openpower-meetup-ganesan-narayanasamy](https://www.linkedin.com/pulse/iit-bombay-openpower-meetup-ganesan-narayanasamy) diff --git a/content/blog/imperial-college-london-and-ibm-join-forces-to-accelerate-personalized-medicine-research-within-the-openpower-ecosystem.md b/content/blog/imperial-college-london-and-ibm-join-forces-to-accelerate-personalized-medicine-research-within-the-openpower-ecosystem.md new file mode 100644 index 0000000..68ff2e5 --- /dev/null +++ b/content/blog/imperial-college-london-and-ibm-join-forces-to-accelerate-personalized-medicine-research-within-the-openpower-ecosystem.md @@ -0,0 +1,36 @@ +--- +title: "Imperial College London and IBM Join Forces to Accelerate Personalized Medicine Research within the OpenPOWER Ecosystem" +date: "2015-07-15" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Dr. Jane Yu, Solution Architect for Healthcare & Life Sciences, IBM_ + +When the [Human Genome Project](http://www.genome.gov/10001772) was completed in 2003, it was a watershed moment for the healthcare and life science industry. It marked the beginning of a new era of personalized medicine where the treatment of disease could be tailored to the unique genetic code of individual patients. + +We're closer than ever to fully tailored treatment. To accelerate advances in personalized medicine research, IBM Systems is partnering with the Data Science Institute of Imperial College London (DSI) and its leading team of bioinformatics and data analysis experts. At the heart of this collaboration is tranSMART, an open-source data warehouse and knowledge management system that has already been adopted by commercial and academic research organizations worldwide as a preferred platform for integrating, accessing, analyzing, and sharing clinical and genomic data on very large patient populations. DSI and IBM Systems will be partnering to enhance the performance of the tranSMART environment within the OpenPOWER ecosystem by taking advantage of the speed and scalability of IBM POWER8 server technology, IBM Spectrum Scale storage, and IBM Platform workload management software. + +At ISC 2015 in Frankfurt, representatives from Imperial College DSI and IBM Systems will be demonstrating an early prototype of a personalized medicine research environment in which tranSMART is directly linked to IBM text analytics for mining curated scientific literature on POWER8. For a demonstration, please visit us at IBM booth #928 at ISC. + +How did we get here? In recent years, the advent of Next Generation Sequencing (NGS) technologies has significantly reduced the cost and time required to sequence whole human genomes: It took roughly $3B USD to sequence the first human genome across 13 years of laboratory bench work; today, a single human genome can be sequenced for roughly $1,000 USD in less than a day. + +The task of discovering new medicines and related diagnostics based on genomic information requires a clear understanding of the impact that individual sequence variations have on clinical outcomes. Such associations must be analyzed in the context of prior medical histories and other environmental factors. 
But this is a computationally daunting task: deriving such insights require scientists to access, process, and analyze genomic sequences, longitudinal patient medical records, biomedical images, and other complex, information-rich data sources securely within a single compute and storage environment. Scientists may also want to leverage the corpus of peer-reviewed scientific literature that may already exist about the genes and molecular pathways influencing the disease under study. Computational workloads must be performed across thousands of very large files containing heterogeneous data, where just a single file containing genomic sequence data alone can be on the order of hundreds of megabytes. Moreover, biological and clinical information critical to the study must be mined from natural language, medical images, and other non-traditional unstructured data types at very large scale. + +As drug development efforts continue to shift to increasingly complex and/or exceedingly rare disease targets, the cost of bringing a drug to market is projected to top $2.5B USD in 2015, up from about $1B USD in 2001. The ability of government, commercial, and academic research organizations to innovate in personalized medicine requires that the compute-intensive workloads essential to these efforts run reliably and efficiently. IBM Systems has the tools to deliver. + +The high-performance compute and storage architecture must have the flexibility to address the application needs of individual researchers, the speed and scale to process rapidly expanding stores of multimodal data within competitive time windows, and the smarts to extract facts from even the most complex unstructured information sources. The financial viability of these initiatives depends on it. The tranSMART environment addresses each of these critical areas. + +Code which demonstrates marked improvements in the performance and scalability of tranSMART on POWER systems will be donated back to the tranSMART open-source community. Early performance gains have already been seen on POWER8. In addition, IBM Systems will be working with DSI, IBM Watson, and other IBM divisions to enable large-scale text analytics, natural language processing, machine learning, and data federation capabilities within the tranSMART – POWER analytical environment. + +We look forward to seeing you at ISC to show you how OpenPOWER’s HPC capabilities are helping to improve personalized medicine and healthcare. + +_About Dr. Jane Yu_ + +_Jane Yu, MD, PhD is a Worldwide Solution Architect for Healthcare & Life Science within IBM Systems. Dr. Yu has more than 20 years of experience spanning clinical healthcare, biomedical research, and advanced analytics. Since joining IBM in 2011, Dr. Yu has been building on-premise and cloud-based data management and analytics systems that enable leading edge clinical and basic science research. She holds an MD and a PhD in Biomedical Engineering from Johns Hopkins University School of Medicine, and a Bachelor of Science in Aeronautics & Astronautics from the Massachusetts Institute of Technology._ + +\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ + +Joseph A. DiMasi, slides: “Innovation in the Pharmaceutical Industry: New Estimates of R&D Costs,” Tufts Center for the Study of Drug Development, November 18, 2014. 
diff --git a/content/blog/infosystems-announces-live-demonstration-capability-for-ibm-power8-in-their-chattanooga-test-facility.md b/content/blog/infosystems-announces-live-demonstration-capability-for-ibm-power8-in-their-chattanooga-test-facility.md new file mode 100644 index 0000000..fcbc18d --- /dev/null +++ b/content/blog/infosystems-announces-live-demonstration-capability-for-ibm-power8-in-their-chattanooga-test-facility.md @@ -0,0 +1,9 @@ +--- +title: "InfoSystems Announces Live Demonstration Capability for IBM POWER8 in their Chattanooga Test Facility" +date: "2014-05-13" +categories: + - "press-releases" + - "blogs" +--- + +CHATTANOOGA, Tenn., May 13, 2014 (BUSINESS WIRE) -- [InfoSystems](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.infosystems.biz%2F&esheet=50863539&newsitemid=20140513005153&lan=en-US&anchor=InfoSystems&index=1&md5=a2013701d529f22104227c8bec0c9cda) announces it has become one of the first IBM business partners to install IBM’s newest Power Systems servers, built on the recently announced POWER8 processor, in its Chattanooga IBM Business Partner Innovation Center (BPIC). InfoSystems’ experts can show customers this new technology before the general availability date via an on-site demo, or remotely from a customer’s location. diff --git a/content/blog/innov8-with-power8-best-in-show-selected-at-interconnect2015.md b/content/blog/innov8-with-power8-best-in-show-selected-at-interconnect2015.md new file mode 100644 index 0000000..083a44d --- /dev/null +++ b/content/blog/innov8-with-power8-best-in-show-selected-at-interconnect2015.md @@ -0,0 +1,40 @@ +--- +title: "Innov8 with POWER8 “Best in Show” Selected at InterConnect2015" +date: "2015-02-26" +categories: + - "blogs" +--- + +By Terri Virnig, Vice President of Power Ecosystem and Strategy, IBM Systems + +It is hard to believe that this semester’s [Innov8 with POWER8 Challenge](http://www-03.ibm.com/systems/power/education/academic/university-challenge.html) has come to a close. Over the course of the fall semester, we worked with three top universities – North Carolina State University, Oregon State University and Rice University – to provide computer science seniors and graduate students with Power Systems and the associated stack environment needed to work on two projects each. The students set out to tackle real-world business challenges with each of their projects, pushing the limits of what’s possible and gaining market-ready career skills and knowledge. To further support the collaboration, each of the universities were given an opportunity to work with industry leaders from the [OpenPOWER Foundation](http://www.openpowerfoundation.org/), including Mellanox, NVIDIA and Altera. + +This week at [IBM InterConnect2015](http://www-01.ibm.com/software/events/interconnect/?cmp=ibmsocial&ct=stg&cr=sc&cm=h&ccy=us&ce=ISM0213&ct=sc&cmp=ibmsocial&cm=h&cr=crossbrand&ccy=us) in Las Vegas, students and their professors from each of the participating universities had the opportunity to showcase their projects at our InterConnect Solution Expo. The conference attracted more than 20,000 attendees and provided them with the opportunity to share the countless hours of research and hard work each of them have put into their projects. 
+ +![Innov8 with POWER8](images/Innov8-with-POWER8-300x225.jpg) + +For those who have been following along over the past few months, you know that we’ve given our community of followers the opportunity to vote for their favorite project with our ‘Tweet your vote’ social voting platform. Before I share the winner of our “Best in Show” recognition, here’s a quick rundown of how the universities have taken advantage of the Power platform to work on truly innovative projects: + +![Innov8 with POWER8_NC State](images/Innov8-with-POWER8_NC-State.jpg) **North Carolina State University (NCSU)** + +- NCSU’s projects addressed real-world bottlenecks in deploying big data solutions. NCSU built up a strong set of skills in big data, working with the Power team to push the boundaries in delivering what clients need, and these projects extended their work to the next level, by taking advantage of the accelerators that are a core element of the POWER8 value proposition. +- Project #1 focused on big data optimization, accelerating the preprocessing phase of their big data pipeline with power-optimized, coherently attached reconfigurable accelerators in FPGAs from Altera. The team assessed the work from the IBM Zurich Research Laboratory on text analytics acceleration, aiming to eventually develop their own accelerators. +- Project #2 focused on smart storage. The team worked on leveraging the Zurich accelerator in the storage context as well. + + **![Innov8 with POWER8_OSU](images/Innov8-with-POWER8_OSU.jpg) Oregon State University (OSU)** + +- OSU’s Open Source Lab has been a leader in open source cloud solutions on Power Systems, even providing a cloud solution hosting more than 160 projects. With their projects, OSU aimed to create strong Infrastructure as a Service (IaaS) offerings, leveraging the network strengths of Mellanox, as well as improving the management of the cloud solutions via a partnership with Chef. +- Project #1 focused on cloud enablement, working to create an OpenPOWER stack environment to demonstrate Mellanox networking and cloud capabilities. +- On the other end, for project #2, OSU took an open technology approach to cloud, using Linux, OpenStack and KVM to create a platform environment managed by Chef in the university’s Open Source Lab. + +![Innov8 with POWER8_Rice](images/Innov8-with-POWER8_Rice.jpg) [](https://openpowerfoundation.org/wp-content/uploads/2015/02/Innov8-with-POWER8_Rice.jpg) **Rice University** [](https://openpowerfoundation.org/wp-content/uploads/2015/02/Innov8-with-POWER8_Rice.jpg) + +- Rice University has recognized that genomics information consumes massive datasets and that developing the infrastructure required to rapidly ingest, perform analytics and store this information is a challenge. Rice’s initiatives, in collaboration with NVIDIA and Mellanox, were designed to accelerate the adoption of these new big data and analytics technologies in medical research and clinical practice. +- Project #1 focused on exploiting the massive parallelism of GPU accelerator technology and linear programming algorithms to provide deeper understanding of basic organism biology, genetic variation and pathology. +- For project #2, students developed new approaches to high-throughput systematic identification of chromatin loops between genomic regulatory elements, utilizing GPUs to in-parallel and efficiently search the space of possible chromatin interactions for true chromatin loops. 
+ +We are especially proud of the work that each and every one of the students has put into the Innov8 with POWER8 Challenge. As a result of social voting across our communities, it is our pleasure to announce that our 2015 Best in Show recognition goes to project **“Genome Assembly in a Week” from Rice University**! The team leader, Prof. Erez Aiden, and students, Sarah Nyquist and Chris Lui, were on hand at InterConnect2015 to receive their recognition at the Infrastructure Matters zone in the Solution Expo at the Mandalay Bay Convention Center on Wednesday + +![Innov8 with POWER8_2](images/Innov8-with-POWER8_2-225x300.jpg) + +Being able to experience the innovative thinking and enthusiasm from our university participants has been such a privilege. Throughout the semester, each of the universities truly made invaluable contributions in the IT space. Thank you to all who voted and stopped by during the conference! We invite you to stay tuned for more updates on these projects at [Edge2015](http://www-03.ibm.com/systems/edge/). You can follow the teams on our [Tumblr page.](http://powersystemsuniversitychallenge.tumblr.com/) diff --git a/content/blog/innovation-nation-get-ready-openpower-summit-starts-tomorrow.md b/content/blog/innovation-nation-get-ready-openpower-summit-starts-tomorrow.md new file mode 100644 index 0000000..2d1d5df --- /dev/null +++ b/content/blog/innovation-nation-get-ready-openpower-summit-starts-tomorrow.md @@ -0,0 +1,28 @@ +--- +title: "Get Ready to Rethink the Data Center: Welcome to OpenPOWER Summit 2015!" +date: "2015-03-16" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Gordon MacKean, OpenPOWER Chairman_ + +We are here, we made it: the OpenPOWER Foundation’s inaugural Summit, "Rethink the Data Center," starts tomorrow! I wanted to take this opportunity to welcome everyone that will be joining us for the OpenPOWER Summit taking place at NVIDIA’s GPU Technology Conference in San Jose. We’ve got an exciting few days planned for you. Our three-day event kicks off tomorrow morning and goes through Thursday afternoon.  The full schedule is available online at [www.openpowerfoundation.org/2015-summit](http://www.openpowerfoundation.org/2015-summit), but here's a quick rundown of what you won't want to miss: + +- **All Show – Demos and Presentations** in the OpenPOWER Pavilion on the GTC exhibit floor!  Join fellow OpenPOWER members to hear firsthand about their OpenPOWER-based solutions. +- **Wednesday – Morning Keynotes** with myself and OpenPOWER President Brad McCredie where we'll unveil just how quickly our hardware ecosystem is expanding. +- **Wednesday – Afternoon Member Presentations** with several of our Foundation members.   You’ll hear from members such as Mellanox, Tyan, Altera, Rackspace, and Suzhou PowerCore about how they're dialing up the volume on innovation. +- **Thursday – Hands-on Firmware Training Labs** hosted by IBM for building, modifying and testing OpenPOWER firmware with expert guides. +- **Thursday – ISV Roundtable** where we'll discuss OpenPOWER at the software level, including presentations, lightning talks, open discussion and facilitated brainstorming. + +  + +With so much great content and a high level of engagement from our members, the OpenPOWER Summit is clearly the place for other interested parties to get involved and learn how they can join a global roster of innovators rethinking the data center.  Our work is far from over and, given our rapid membership growth, there is no slowdown in sight.  
As of today, the Foundation comprises over 110 members across 22 countries. Through eight chartered Work Groups, and more on the way, we are providing the technical building blocks needed to equip our community to drive meaningful innovation.
+ +## [**Scaling Up And Out A Bioinformatics Algorithm**](http://devpost.com/software/scaling-up-and-out-a-bioinformatics-algorithm) + +In addition to further developing an application that advances precision medicine, the engineers at Delft University of Technology acquired valuable skills both on a technical and team building level. As the team continues to work to further build the application, they are optimistic that working with the OpenPOWER Foundation will create a valuable network of partners to further collaborate and grow. + +**What was the inspiration for your application?** _For a couple of years now, our group in the TUDelft has been actively working to address the computational challenges in DNA analysis pipelines resulting from Next Generation Sequencing (NGS) technology, which brings great opportunities for new discoveries in disease diagnosis and personalized medicine. However, due to the large size of used datasets, it would take a prohibitively long time to perform NGS data analysis. Our solution is targeted to combine scaling with high-performance computer clusters and hardware acceleration of genetic analysis tools to achieve an efficient solution. – Zaid Al-Ars, Assistant Professor at the Delft University of Technology and co-Founder of Bluebee_ + +## [**EmergencyPredictionOnSpark**](http://devpost.com/software/emergencypredictiononspark) + +Antonio Carlos Furtado was able to develop an emergency call prediction application through the OpenPOWER Developer Challenge, bringing himself up to speed with the OpenPOWER environment for the first time and then trying out different approaches to implementing his Big Data Analytics application. He is interested in exploring new features in deep learning and excited to get a glimpse of what is new in terms of high-performance computing at SC16. + +**What did you learn from the Challenge?** + +_I learned more from the OpenPOWER Developer Challenge than what I usually learn after taking a four-month course at the university. The most useful thing I learned was probably the functional programming paradigm. As with most programmers, I am more familiar with the imperative programming paradigm. At some point during the Challenge, I realized that I would have to get myself familiarized with Scala programming language and functional programming to get my project completed in time. The main goal of the project was to use Apache Spark to scale the training of a deep learning model across multiple nodes. When learning about Apache Spark, I found that not only are there more resources for Scala, but it is also the best way to use it. I enjoyed programming in Scala so much that I continue learning it and using it even today. – Antonio Carlos Furtado, MSc Student at University of Alberta and Developer at wrnch_ + +## [**artNET Genre Classifier**](http://devpost.com/software/artnet-genre-classifier) + +Developers Praveen Sridhar and Pranav Sridhar were intrigued by the differentiated compute facilities provided to applicants. Initially, joining the Challenge was about testing the technologies provided on their art genre classifier; however, it transformed into absorbing and understanding deep learning through participation, which is imperative for long-term application development. + +**Why did you decide to participate in the OpenPOWER Developer Challenge?** + +_I was fascinated by the fact that such awesome compute facilities were being provided by OpenPOWER for Developer Challenge participants. 
I initially just wanted to try out what was being provided, but once I realized its power, there was no stopping. I practically learned Deep Learning by participating in this Challenge. – Praveen Sridhar, Freelance Developer and Data Analyst_ + +## [**DistributedTensorFlow4CancerDetection**](http://devpost.com/software/distributedtensorflow4cancerdetection) + +Altoros Labs found that combining the rapidly developing challenge space involving automated cancer detection using TensorFlow with the robust and proven platform offered through the OpenPOWER Developer Challenge led to amazing results. The developers are expecting the beta version of the application to be launched in a few months, and Altoros Labs will continue to utilize the OpenPOWER community to strengthen the application. + +**Why did you decide to participate in the OpenPOWER Developer Challenge?** + +_Exploring TensorFlow has been one of our R&D focuses recently. We also knew that POWER8 technology is good at enhancing big data computing systems. Our team liked the idea of bringing the two solutions together, and the Challenge was a great opportunity to do so. Even though it was the first time we tried to participate in this kind of challenge, we got promising results and are going to continue with experiments. – Ksenia Ramaniuk, Head of Java & PHP Department at Altoros_ + + Putting advanced tools at the fingertips of some of the most innovative minds is powering the growing open technology ecosystem, and the OpenPOWER Foundation is pleased to be a part of the progression. We’ll continue to place great importance on encouraging developer-focused collaborations and innovations that are capable of impacting the industry. + +## Help Build the Next Great OpenPOWER Application + +Join the Grand Prize winners with IBM and OpenPOWER at [SC16](http://sc16.supercomputing.org/) in Salt Lake City, November 15-19. Hear first-hand their experiences and see full demos of their winning applications at the IBM booth. + +Are you ready to get started on your OpenPOWER application? Check out our new Linux Developer Portal at [https://developer.ibm.com/linuxonpower/](https://developer.ibm.com/linuxonpower/). Think your application idea is good enough to win the OpenPOWER Developer Challenge? Then be sure to follow [http://openpower.devpost.com](http://openpower.devpost.com) to get updates on next year’s Challenge! 
diff --git a/content/blog/inspur-power-systems-and-yadro-join-openpower-foundation-as-platinum-members.md b/content/blog/inspur-power-systems-and-yadro-join-openpower-foundation-as-platinum-members.md new file mode 100644 index 0000000..0214f44 --- /dev/null +++ b/content/blog/inspur-power-systems-and-yadro-join-openpower-foundation-as-platinum-members.md @@ -0,0 +1,47 @@ +--- +title: "Inspur Power Systems and Yadro Join OpenPOWER Foundation as Platinum Members" +date: "2018-11-02" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +# **Inspur Power Systems and Yadro Join OpenPOWER Foundation as Platinum Members** + +PISCATAWAY, N.J.--(BUSINESS WIRE)--The OpenPOWER Foundation announced today two new Platinum-level members: [Inspur Power Systems](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fwww.inspurpower.com%2F&esheet=51893639&newsitemid=0&lan=en-US&anchor=Inspur+Power+Systems&index=1&md5=fe4e1a4bc11fd7ed1f5dbd15f1ef17ce) and [Yadro](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fyadro.com%2F&esheet=51893639&newsitemid=0&lan=en-US&anchor=Yadro&index=2&md5=592bce6ca137c5dfaa2ae4b4e1f276da). + +“Since our founding in 2013, compute infrastructure deployment has advanced and evolved,” said Bryan Talik, president, OpenPOWER Foundation. “Having Inspur Power Systems and Yadro participate in the foundation at the Platinum level signifies the growth of open innovation in key markets around the world.” + +**Inspur Power Systems: Infrastructure Innovation** + +Inspur Power Systems strives to develop a new generation of OpenPOWER server products for data centers facing the “cloud intelligence” era. The company has released three [OpenPOWER servers](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.inspursystems.com%2Fopen-platforms%2F&esheet=51893639&newsitemid=0&lan=en-US&anchor=OpenPOWER+servers&index=3&md5=ae226ae8059c2b0f4f781cfabb049ff1) in 2018 that provide improved performance and reduced total cost of ownership. + +“Inspur Power Systems is committed to research, development and production of server products based on POWER technology,” said Jimmy Zheng, OpenPower server product manager, Inspur Power Systems. “We will focus on improving the server ecosystem and empowering customers in the age of artificial intelligence, machine learning and cloud servers.” + +**Yadro: Engineering Excellence** + +Yadro is committed to transforming the way enterprises address IT challenges. Its [VESNIN server](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fyadro.com%2Fproducts%2Fvesnin&esheet=51893639&newsitemid=0&lan=en-US&anchor=VESNIN+server&index=4&md5=750ec3f092d71f53724b7ffeade5709c) is the world’s first OpenPOWER enterprise-class, high-performance server designed for data-intensive applications, and supports the growing demand for efficient scale-out installations. + +“We believe OpenPOWER will be a core technology as infrastructure evolves to enable cognitive computing,” said Artem Ikoev, co-founder and CTO, Yadro. 
“We’re excited to increase our partnership with the OpenPOWER Foundation and accelerate our efforts to help enterprises transform their IT capabilities.” + +Both Ikoev and Zheng were appointed to the [OpenPOWER Foundation Board of Directors](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fopenpowerfoundation.org%2Fabout-us%2Fboard-of-directors%2F&esheet=51893639&newsitemid=0&lan=en-US&anchor=OpenPOWER+Foundation+Board+of+Directors&index=5&md5=721a9d9ba19388cc8aeb9c0fc9a59e6e) in October, Ikoev as its Chair and Zheng as a Director. + +The OpenPOWER Foundation welcomes members of all levels. Visit [www.openpowerfoundation.org/membership](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.openpowerfoundation.org%2Fmembership&esheet=51893639&newsitemid=0&lan=en-US&anchor=www.openpowerfoundation.org%2Fmembership&index=6&md5=948c935ecc71bdcb27bc5412c1c1a257) for more information on membership and its benefits. + +**About OpenPOWER Foundation** + +The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the Foundation is to create an open ecosystem built around the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry. + +Founded in 2013 by Google, IBM, NVIDIA, Tyan and Mellanox, the organization has grown to 350+ members worldwide from all sectors of the High Performance Computing ecosystem. For more information on OpenPOWER Foundation, visit [www.openpowerfoundation.org](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.openpowerfoundation.org%2F&esheet=51893639&newsitemid=0&lan=en-US&anchor=www.openpowerfoundation.org&index=7&md5=d146a1d547471814cf70d2efe49d94a7). + +## Contacts + +**Media:** OpenPOWER Foundation Joni Sterlacci, 732-562-5464 [j.sterlacci@ieee.org](mailto:j.sterlacci@ieee.org) diff --git a/content/blog/inspur-power-systems-announce-the-fp5468g2.md b/content/blog/inspur-power-systems-announce-the-fp5468g2.md new file mode 100644 index 0000000..3922410 --- /dev/null +++ b/content/blog/inspur-power-systems-announce-the-fp5468g2.md @@ -0,0 +1,20 @@ +--- +title: "Inspur Power Systems announce the FP5468G2" +date: "2019-11-28" +categories: + - "blogs" +--- + +SC19 in Denver, Colorado saw OpenPOWER Foundation Platinum member Inspur Power Systems announcing the latest in their range of OpenPOWER Servers. 
+ +The FP5468G2 is targeted at Deep Learning and AI Cloud applications and packs an impressive set of specifications into a 4U, 19" Rack chassis: + +- Two POWER9 processors providing up to 44 cores/176 threads +- Four PCIe Gen 4 x16 slots +- Up to 1TB of DDR4 RAM in 16 DIMM slots +- Eight NVIDIA V100 or sixteen T4 GPUs +- Up to 24 3.5" drives, 6 of which support U.2 NVMe + +![](images/FP5468G2-20191119-225x300.jpg) + +More information is available in Inspur's [press release.](https://openpowerfoundation.org/wp-content/uploads/2019/11/IPS-FP5468G2-20191121-Final.pdf) diff --git a/content/blog/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects.md b/content/blog/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects.md new file mode 100644 index 0000000..be81605 --- /dev/null +++ b/content/blog/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects.md @@ -0,0 +1,70 @@ +--- +title: "Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI" +date: "2015-10-05" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "nvidia" + - "mellanox" + - "department-of-energy" + - "coral" + - "featured" + - "hpc" + - "capi" + - "acceleration" + - "capi-series" +--- + +_By Scot Schultz, Director of HPC and Technical Computing, Mellanox_ + +## Business Challenge + +Some computing jobs are so large that they must be split into pieces and solved in parallel, distributed via the network across a number of computing nodes. We find some of the world’s largest computing jobs in the realm of scientific research, where continuous advancement will require extreme-scale computing with machines that are 500-to-1000 times more capable than today’s supercomputers. As researchers constantly refine their models and push to increased resolutions, the demand for more parallel computation and advanced networking capabilities is paramount. + +## Computing Challenge + +Efficient high-performance computing systems require [high-bandwidth, low-latency connections](http://bit.ly/1Lctmnq) between thousands of multi-processor nodes, as well as high-speed storage systems. As a result of the ubiquitous data explosion and the ascendance of Big Data, especially unstructured data, today’s systems need to move enormous amounts of data as well as perform more sophisticated analysis. + +The network has now become the critical element in gaining insight from today’s massive flows of data. + +## Solution + +Only Mellanox delivers industry-standards-based solutions with advanced native hardware acceleration engines, and leveraging the latest advancements from IBM’s OpenPOWER architecture takes performance to a whole new level. + +Already deployed in over 50% of the world’s most powerful supercomputing systems, Mellanox’s high-speed interconnect solutions are proven to deliver the highest scalability, efficiency, and unmatched performance for HPC systems. The latest [Mellanox EDR 100Gb/s interconnect architecture](http://bit.ly/1Lctmnq) includes native support for one of the newest innovations brought forth by OpenPOWER, [the Coherent Accelerator Processor Interface (CAPI)](http://ibm.co/1QVeo58). + +[Mellanox 100Gb/s ConnectX®-4 architecture with native support for CAPI](http://bit.ly/1Lctmnq) is capable of handling massively parallel communications. By delivering up to 100Gb/s of reliable, zero-loss connectivity, ConnectX-4 with CAPI provides an optimized platform for moving enormous volumes of data. 
With much tighter integration between the Mellanox high-performance interconnect and the processor, POWER-based systems can rip through high volumes of data and bring compute and data closer together to derive greater insights. Mellanox ConnectX-4 can be leveraged for 100Gb CAPI-attached InfiniBand, Ethernet, or storage. + +[![CAPI Interconnects with Mellanox Data Flow](images/CAPI-Mellanox-Interconnect-Data-Flow-1024x607.jpg)](http://bit.ly/1Vz7KTC) + +CAPI also simplifies the memory management between interconnect and CPU – which results in reduced overhead, higher performance and increased scalability. Because CAPI provides a level of integration that removes additional latency compared to platforms featuring traditional PCI-Express bus semantics, the Mellanox interconnect can move data in and out of the system with even greater efficiency. + +Back to tackling the world’s toughest scientific problems – Mellanox ConnectX-4 EDR 100Gb/s “Smart” interconnect technology and IBM’s POWER architecture with CAPI can help. [Oak Ridge National Laboratory](http://1.usa.gov/1VxO4EN) and [Lawrence Livermore National Laboratory](http://1.usa.gov/1M9X2hi), for example, have chosen solutions utilizing OpenPOWER designs developed by [Mellanox](http://bit.ly/1LruDJ5), [IBM](http://ibm.co/1Nf4jSK), and [NVIDIA](http://bit.ly/1QThDtP) for the Department of Energy’s next-generation Summit and Sierra supercomputer systems. Summit and Sierra will deliver raw computing power at more than 100 petaflops at peak performance, which will make them the most powerful computers in the world. + +From innovations in nanotechnology and climate research to medical research and renewable energy, Mellanox and members of the OpenPOWER ecosystem are leading innovations in high performance computing. + +## Learn more about Mellanox 100Gb/s and CAPI + +Mellanox CAPI-attached interconnects are suitable for the largest deployments, but they are also accessible for more modest clusters, clouds, and commercial datacenters. Here are a few ways to get started. + +- [Learn more about Mellanox ConnectX-4 100Gb Adapters](http://bit.ly/1RpVW5w) +- [Read the Mellanox ConnectX-4 Product Brief](http://bit.ly/1LruDJ5) +- [Follow a tutorial to get acquainted with your ConnectX-4 adapter on Linux](http://bit.ly/1FQWXSH) +- [Download a whitepaper on SwitchIB, the switch architecture for 100Gb interconnects](http://bit.ly/1Lctmnq) +- [Engage with others using Mellanox 100Gb technology and find solutions in the Developer Community](http://bit.ly/1RpVW5w) + +Keep coming back to see blog posts from IBM and other OpenPOWER Foundation partners on how you can use CAPI to accelerate computing, networking and storage. + +- [CAPI Series 1: Accelerating Business Applications in the Data-Driven Enterprise with CAPI](https://openpowerfoundation.org/blogs/capi-drives-business-performance/) +- [CAPI Series 2: Using CAPI and Flash for larger, faster NoSQL and analytics](https://openpowerfoundation.org/blogs/capi-drives-business-performance/) +- [CAPI Series 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/) + +* * * + +**_About Scot Schultz_** + +_[![Scot Schultz, Mellanox](images/ScotSchultz.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/10/ScotSchultz.jpg)Scot Schultz is an HPC technology specialist with broad knowledge in operating systems, high-speed interconnects and processor technologies. 
Joining the Mellanox team in March 2013 as Director of HPC and Technical Computing, Schultz is a 25-year veteran of the computing industry. Scot currently maintains his role as Director of Educational Outreach and is a founding member of the HPC Advisory Council, as well as a member of various other industry organizations. Follow him on Twitter: [@ScotSchultz](https://twitter.com/ScotSchultz)_ diff --git a/content/blog/international-workshop-openpower-hpc.md b/content/blog/international-workshop-openpower-hpc.md new file mode 100644 index 0000000..8fc1cca --- /dev/null +++ b/content/blog/international-workshop-openpower-hpc.md @@ -0,0 +1,36 @@ +--- +title: "Call for Papers: International Workshop on OpenPOWER for HPC" +date: "2018-04-13" +categories: + - "blogs" +tags: + - "openpower" + - "hpc" + - "openpower-foundation" + - "isc-high-performance-conference" +--- + +In collaboration with the ISC High Performance Conference, the OpenPOWER Foundation is organizing the third HPC workshop in Frankfurt, Germany. + +These workshops have always been a place for experts from different scientific and engineering backgrounds to come together, identify common ground and discuss how they are using OpenPOWER technologies. + +The organizers from Oak Ridge National Lab (ORNL) and Juelich Supercomputing Centre are calling for papers describing the latest advances in OpenPOWER. + +These papers should address challenges in system architecture, networking, memory designs, exploitation of accelerators, programming models and porting applications in machine learning, data analytics, modelling and simulation. Early experience using IBM POWER9 processors and NVIDIA Volta GPUs is of particular interest. + +Topics of interest include, but are not limited to: + +- Experiences porting applications to OpenPOWER nodes to exploit their HPC and data analytics capabilities +- Designs and use models for GPU and FPGA-accelerated applications +- Co-designing the HPC software stack +- Programming models for HPC and data analytics +- Tools ecosystem to improve productivity on OpenPOWER architectures +- System architectural choices +- Low-level communication APIs and I/O frameworks +- Runtime environments and schedulers +- Power-aware computing and power optimization studies for OpenPOWER +- Benchmarking and validation studies on OpenPOWER architectures + +If you are interested in submitting a paper, the deadline is **April 22, 2018.** All contributions are planned to be published in the ISC'18 Joint Workshop Proceedings volume. + +[Papers should be uploaded here](https://easychair.org/conferences/?conf=iwoph18). 
diff --git a/content/blog/introducing-ibm-power10-functional-simulator.md b/content/blog/introducing-ibm-power10-functional-simulator.md new file mode 100644 index 0000000..bbe253c --- /dev/null +++ b/content/blog/introducing-ibm-power10-functional-simulator.md @@ -0,0 +1,56 @@ +--- +title: "Introducing IBM® POWER10 Functional Simulator" +date: "2020-10-02" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "featured" + - "openpower-summit" + - "openpower-foundation" + - "hardware" + - "linux" + - "ibm-power" + - "power10" + - "functional-simulator" +--- + +By [Brad Thomasson](https://www.linkedin.com/in/bradford-thomasson-9b89044/), Cognitive Software Engineer, IBM + +After announcing the newest [IBM POWER10 processor at the Hot Chips 2020 in August](https://newsroom.ibm.com/2020-08-17-IBM-Reveals-Next-Generation-IBM-POWER10-Processor), our IBM Cognitive Systems Simulation team is now proud to present the [IBM POWER10 Functional Simulator](https://www14.software.ibm.com/webapp/set2/sas/f/pwrfs/pwr10/home.html). + +This publicly available simulation environment is designed to educate developers, facilitate porting of existing Linux applications to the POWER10 architecture, and enable new ones to be created. + +\[caption id="attachment\_7638" align="aligncenter" width="1024"\]![](images/POWER10-1024x683.jpeg) _A close-up of the first commercialized 7nm processor, the IBM POWER10. Photo credit: Connie Zhou for IBM_\[/caption\] + +  + +This simulator provides enough POWER10 processor complex functionality to allow the entire software stack to execute. This includes loading, booting and running a little endian Linux environment. + +Note that while the IBM POWER10 Functional Simulator serves as a full instruction set simulator for the POWER10 processor, it may not model all aspects of the IBM Power Systems POWER10 hardware and thus may not exactly reflect the behavior of the POWER10 hardware. + +Features/support available in the simulator include: + +- POWER10 hardware reference model +- Full instruction set simulator for Power ISA as implemented in POWER10 +- Models complex SMP effects +- Architectural modeled areas: + - Functional behavior of all units (Load/Store, FXU, FPU, DFP, VMX, VSX, etc.) + - Exceptions and Interrupt handling + - Address translation, both Paravirtualized HPT and two level Radix Tree + - Memory and basic translation cache modeling (SLBs, TLBs, ERATs) + - Instruction Prefix Support + - VSX Matrix-Multiply Assist (MMA) Instructions for AI + - Reduced-Precision instructions to accelerate matrix multiplication + - Copy-Paste Facility + - New AIL/HAIL programmability feature for Linux/Hybrid cloud +- Linux and Hypervisor development and debug platform +- TCL command-line interface provides: + - Custom user initialization scripts + - Processor state control for debug: Step, Run, Cycle run-to, Stop, etc. + - Register and Memory R/W interaction + +  + +Our team is very open to feedback and questions. For all technical inquiries and suggestions, please reach out to our Cognitive Systems Simulation team through the [Customer Connect Support Channel](https://login.ibm.com/oidc/sps/auth?Target=https%3A%2F%2Flogin.ibm.com%2Foidc%2Fendpoint%2Fdefault%2Fauthorize%3FqsId%3D2d9cf726-59b4-4d0e-b4f0-f04c029876d7%26client_id%3DMzUxMDEwNzQtZTU2Ny00&client_id=MzUxMDEwNzQtZTU2Ny00) or consider joining the [OpenPOWER Foundation Slack workspace](https://join.slack.com/t/openpowerfoundation/shared_invite/zt-9l4fabj6-C55eMvBqAPTbzlDS1b7bzQ). 
diff --git a/content/blog/introducing-ibm-power9-functional-simulator.md b/content/blog/introducing-ibm-power9-functional-simulator.md new file mode 100644 index 0000000..927e426 --- /dev/null +++ b/content/blog/introducing-ibm-power9-functional-simulator.md @@ -0,0 +1,46 @@ +--- +title: "Introducing IBM® POWER9 Functional Simulator" +date: "2018-02-09" +categories: + - "blogs" +tags: + - "openpower-foundation" + - "power-systems" + - "power9" + - "power9-functional-simulator" +--- + +By Leif Reinert, Bradford Thomasson and Saif Abrar + +As we launch POWER9, our IBM Cognitive Systems Simulation team is proud to introduce the POWER9 Functional Simulator as a new publicly available simulation environment. [Click here](https://www-304.ibm.com/webapp/set2/sas/f/pwrfs/pwr9/home.html) to download the POWER9 Functional Simulator from our website. + +By implementing the functional behavior of all core units, as well as the generalized simulation of the memory, disk, network, and system console, our simulator enables execution of the entire software stack including loading, booting and running a little-endian Linux environment on a local x86 host. Using the TCL command-line interface then allows users to customize system initialization and processor state control. + +Simulating the full Power ISA instruction set as implemented in POWER9, this tool serves as a vehicle for education, new application development, and porting of existing Linux applications to the POWER9 architecture. + +Features/support available in the simulator include: + +- POWER9 hardware reference model +- Full instruction set simulator for Power ISA as implemented in POWER9 +- Models complex SMP effects +- Architectural modeled areas: + - Functional behavior of all units (Load/Store, FXU, FPU, VMX, VSX, etc.) + - Exceptions and Interrupt handling + - Address translation, both Paravirtualized HPT and two-level Radix Tree + - Memory and basic translation cache modeling (SLBs, TLBs, ERATs) +- Linux and Hypervisor development and debug platform +- TCL command-line interface provides: + - Custom user initialization scripts + - Processor state control for debug: Step, Run, Cycle run-to, Stop, etc. + - Register and Memory R/W interaction + +We have already seen how this capability, in addition to support for the Software Development Kit (SDK) for Linux on Power, has provided OpenPOWER partners with a powerful set of features for development projects such as: + +- Optimization of compilers +- Testing of open-source firmware and upstream Linux kernels +- Development of execution-driven performance models +- Creation of early software prototyping environments + +The POWER9 Functional Simulator’s instruction tracing feature and its companion tools allow users to optimize their code by analyzing the behavior on a microarchitectural level. User-controlled instruction traces, driven by workloads executed on the POWER9 Functional Simulator, can be digested by post-processing tools that generate a cycle accurate representation of the POWER9’s pipeline stages. Each instruction can be analyzed throughout the pipeline using graphical and statistical tools. This provides users with all the details necessary to optimize their code and maximize performance. + +Our team is very open to feedback and questions. For all technical inquiries and suggestions, please reach out to our Cognitive Systems Simulation team through the [Customer Connect Support Channel](https://www.ibm.com/technologyconnect/issuemgmt/home.xhtml). 
diff --git a/content/blog/introducing-openpower-developer-tools.md b/content/blog/introducing-openpower-developer-tools.md new file mode 100644 index 0000000..9580ab7 --- /dev/null +++ b/content/blog/introducing-openpower-developer-tools.md @@ -0,0 +1,14 @@ +--- +title: "Introducing OpenPOWER Developer Tools – A One Stop Resource for Porting and Building OpenPOWER Compatible Solutions" +date: "2015-02-10" +categories: + - "blogs" +--- + +_by Jeff Scheel, IBM Linux on Power Chief Engineer_ + +Since its inception, the OpenPOWER Foundation has been dedicated to fostering a collaborative environment to drive meaningful hardware and software innovation.  As part of that commitment, OpenPOWER today launched a valuable new asset for developers:  [OpenPOWER Developer Tools](https://openpowerfoundation.org/technical/technical-resources/openpower-developer-tools/). + +Available on the [Technology Resources](https://openpowerfoundation.org/technical/technical-resources/) section of the [OpenPOWER website](https://openpowerfoundation.org/), OpenPOWER Developer Tools is a quintessential starting place for developers looking to participate in OpenPOWER's growing ecosystem.  The tool kits and other resources – made available by our members – include a variety of hardware, software and other technical resources that will enable developers to more quickly leverage POWER's open architecture to build solutions that follow OpenPOWER's design concept. + +Current tools and technical assets available include: Tyan's OpenPOWER customer reference system, Nallatech's CAPI Developer Kit, a software developer toolkit for Linux on POWER, access to IBM's Power Development Cloud and more.  And, with a membership over 90 members and counting, we expect to post additional tools on an ongoing basis ... so check back often! diff --git a/content/blog/introducing-the-falcon-ii-the-worlds-first-pcie-4-0-composable-ai-box.md b/content/blog/introducing-the-falcon-ii-the-worlds-first-pcie-4-0-composable-ai-box.md new file mode 100644 index 0000000..8c890de --- /dev/null +++ b/content/blog/introducing-the-falcon-ii-the-worlds-first-pcie-4-0-composable-ai-box.md @@ -0,0 +1,74 @@ +--- +title: "Introducing the Falcon II, the World’s First PCIe 4.0 Composable AI Box" +date: "2019-06-17" +categories: + - "blogs" +tags: + - "featured" +--- + +[**Yomi Yeh**](https://www.linkedin.com/in/yomi-yeh-b70764b4/?originalSubdomain=tw)**, product manager, H3 Platform** + +Computer systems are about to get a whole lot faster. For almost two decades, PCI Express has been the data interconnect standard, twisting together GPU, storage, and networking within systems from PCs to High Performance Computing systems. This year, starting with the high end of the market, a transition will begin toward systems based on PCI Express 4.0. + +PCIe Gen4 offers cutting-edge system performance that doubles the interconnect bandwidth over PCIe Gen3. It provides 256 GT/s, or 31.5 GB/s in a x16 PCIe Gen4 slot, relieving system bottlenecks between CPU root complexes and accelerators, storage and IO devices. This speed increase will bring significant benefit across the full range of applications such as AI and machine learning, scientific simulation, high resolution visualization and rendering. + +AI is one of the most booming industries in computer science. 
But companies are facing challenges in implementation, as AI requires a huge resource of accelerators for computing; professional accelerators for AI are very expensive; and different applications require different GPU allocations and ratios of CPU to GPU. + +To help solve this implementation problem, H3 Platform, a pioneer of PCI Express switch, is introducing the world’s first PCIe 4.0 composable AI box with IBM Power9 systems, the **Falcon II**, to provide interconnect at PCIe 4.0 bandwidth for both the host side and device side. an end to end PCIe 4.0 performance. The Falcon II features four Gen4 x16 host connectors and sixteen Gen4 x16 double width PCIe 4.0 slots for GPUs, NVMe drives or network interfaces. Each Falcon II consists of two drawers along with eight double-width GPUs to create the next-generation HPC acceleration platform. The Falcon II can be connected to Power9 with PCIe 4.0 connections to aggregate GPU traffic from several Gen3 devices; it also provides bandwidth to the server at Gen4 speed. The box is Gen4-ready and will support all Gen 4 PCIe devices as they become available later this year. + +A quick look at the outlook (Figure 1) and key features: + +- 4U 19” disaggregated compute accelerator +- Support up to sixteen PCIe 4.0 GPGPU +- Support up to four Host servers +- Double Performance to Existing PCIe 3.0 Expansion Boxes +- AI Composability + +\[caption id="attachment\_6900" align="alignleft" width="985"\]![](images/Figure-1-H3.png) Figure 1. Outlook of Falcon II\[/caption\] + +  + +  + +  + +  + +  + +  + +  + +  + +  + +To demonstrate the performance of PCIe Gen4, we are using a PCIe 4.0 NVMe SSD as the PCIe device to install in the Falcon II and connect it to Power9 (IBM Power Systems LC922). Please refer to Figure 2 for the architecture. + +\[caption id="attachment\_6901" align="alignleft" width="939"\]![](images/Figure-2-H3.png) Figure 2. Connecting Power9 to the Falcon II, and Assign SSD to Power9\[/caption\] + +  + +  + +  + +  + +  + +  + +Check the link speed of Power9 and PCIe Gen 4 NVMe SSD on Ubuntu 1804 (Figure 3). It shows “Speed 16GT/s, Width x16” at Power9 root complex port; and “Speed 16GT/s, Width x 4” at PCIe Gen4 NVMe SSD. + +\[caption id="attachment\_6902" align="aligncenter" width="920"\]![](images/Figure-3-H3.png) Figure 3. Link Speed of P9 and PCIe Gen4 NVMe SSD\[/caption\] + +Use FIO for the testing benchmark. The test was run at random read 4K. We get 4680MB/s (Figure 4) which is better than the theoretical bandwidth of PCIe 3.0 x4 (3940MB/s). + +\[caption id="attachment\_6903" align="aligncenter" width="922"\]![](images/Figure-4-H3.png) Figure 4. Performance of PCIe Gen4 NVMe SSD at Random Read 4K\[/caption\] + +The Falcon II extends the expansion ability of Power9 with 16 PCIe Gen4 ready expansion slots that double the interconnect bandwidth of Power9 to aggregate the traffic of PCIe Gen3 devices. Once the PCIe Gen4 GPU hits the market, the Falcon II will bring further improvement of IO transmission. + +If you are attending [ISC in Frankfurt](https://www.isc-hpc.com/) this year, stop by the OpenPOWER Foundation booth (E-1054) to see the technology firsthand; and if you would like to know more information, please contact us at [sales@h3platform.com](mailto:sales@h3platform.com). 
diff --git a/content/blog/introducing-the-little-endian-openpower-software-development-environment-and-its-application-programming-interfaces.md b/content/blog/introducing-the-little-endian-openpower-software-development-environment-and-its-application-programming-interfaces.md new file mode 100644 index 0000000..dee70e8 --- /dev/null +++ b/content/blog/introducing-the-little-endian-openpower-software-development-environment-and-its-application-programming-interfaces.md @@ -0,0 +1,52 @@ +--- +title: "Introducing the Little-Endian OpenPOWER Software Development Environment and its application programming interfaces" +date: "2015-01-16" +categories: + - "blogs" +--- + +Presented by: [Michael Gschwind](https://www.linkedin.com/profile/view?id=7012740&authType=NAME_SEARCH&authToken=oq4A&locale=en_US&srchid=32272301421438067339&srchindex=1&srchtotal=9&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438067339%2CVSRPtargetId%3A7012740%2CVSRPcmpt%3Aprimary) + +Over the past three decades, the Power Architecture has been an important asset in IBM’s systems strategy. During that time, Power-based systems powered desktops, technical workstations, embedded devices, game consoles, supercomputers and commercial UNIX servers. + +The ability to adapt the architecture to new requirements has been key to its longevity and success. Over the past several years, a new class of computing solutions has emerged in the form of dedicated data center-scale computing platforms to power services such as search and social computing. Data center-level applications most often involve data discovery and/or serving from large repositories. Applications may either be written in traditional object-oriented languages such as C++, or in new dynamic scripting languages such as JavaScript, PHP, Python, Ruby, etc. + +Because many datacenters use custom-designed servers, these applications have suffered from lock-in to merchant-silicon processors optimized for desktop environments. The new Open Power consortium creates an alternative to x86 lock-in by building an open source ecosystem that offers ease of porting from processors currently used in datacenters. + +Unix and Linux applications have offered great portability in the past, but required some investment to avoid processor-specific code patterns. To simplify porting of applications to the new Open Power environment, we reengineered the environment to ease the migration of software stacks and entire systems. + +One particularly pervasive dependence is the byte ordering of data. Byte ordering affects both the layout of data in memory and the layout of disk-based data repositories. While Power had supported both big-endian (most significant byte first) and little-endian (least significant byte first) data orderings, common Power environments have always used big-endian ordering. To address endianness, Power8 was defined to offer the same high performance for big- and little-endian applications. Building on that hardware capability, Open Power defines a new execution environment that exploits little-endian execution. In addition, compiler built-in functions handle transformation of data orderings that cannot be readily changed with an endian configuration switch, such as the ordering of vector elements in the SIMD execution units. 
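To make the byte-ordering dependence concrete, here is a minimal illustrative sketch (not taken from the presentation) showing how the same 32-bit value is laid out differently in memory on little-endian and big-endian systems; this is why on-disk repositories written under one ordering cannot simply be reinterpreted under the other.

```c
/* Minimal sketch: how byte ordering determines in-memory layout.
 * Standard C, no platform-specific assumptions beyond uint32_t. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x11223344;
    unsigned char bytes[sizeof value];

    memcpy(bytes, &value, sizeof value);

    /* Little-endian (e.g. ppc64le, x86-64) prints: 44 33 22 11
     * Big-endian    (e.g. traditional ppc64)  prints: 11 22 33 44 */
    printf("in-memory layout: %02x %02x %02x %02x\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);

    return 0;
}
```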
+ +Introducing a new data layout necessarily breaks binary compatibility, which created an opening to define a new Application Binary Interface governing the interoperation of program modules, such as data layout and function calling conventions. + +To respond to changes in workload behavior and programming patterns, we co-optimized hardware and software to account for the evolution of workloads since the original introduction of Power: + +1\. Growth in memory and data sizes: In modern applications, external variables are accessed via data dictionaries (GOT or TOC) holding the address of all variables. The original IBM GOT used to access global variables was restricted to 64 KB, or 8,000 variables, per module, reflecting the ability to use 16-bit offsets in Power load and store instructions; this was becoming a limitation for enterprise applications and complicated the application build process and/or degraded performance. + +Power8 can combine multiple instructions into a single internal instruction with a large 4 GB offset. We introduced a new “medium code model” in the ABI which takes advantage of displacement fusion to support GOTs with up to 500 million variables. By default, compilers and linkers generate fusable references for the medium code model. + +2\. Accelerate data accesses by “inlining” data in the dictionary: With the growth in dictionary size enabled by displacement fusion, it becomes possible to include data objects in the GOT rather than only including a pointer to the object. This reduces the number of accesses necessary to retrieve application data from memory and improves cache locality. + +3\. Eliminate penalties for data abstraction: To make object-oriented programs as efficient as their FORTRAN equivalents, we expanded the passing of input and output parameters in registers to classes. Classes can now use up to eight floating-point or vector registers per input or output parameter. This makes it possible to code classes for complex numbers, vertices, and other abstract types that are as efficient as built-in types. + +4\. Accelerate function calls: Object-oriented programming has led to a marked shift in programming patterns, with the average size of functions dropping from millions of instructions in FORTRAN codes to tens of instructions in object-oriented applications. Consequently, reducing the fixed cost per function invocation is more important than before. + +Previously, the Power ABI made initializing a callee’s entire environment the responsibility of glue code hidden from programmers and compilers on cross-module calls. To ensure environments are properly initialized for all languages, the generated glue code had to conservatively assume for these functions that addressability must be established for the new module. Linux requires all externally visible functions to be resolved at runtime, extending the cost of dynamic linking to most functions that will ultimately resolve to calls within a module. + +The new ABI makes the called function responsible for setting up its own environment. In addition, each function can have two entry points, one for local calls from within the same module to skip initialization code when no setup is necessary (this local entry point can be used either for direct calls, or via the dynamic linkage code). + +
5\. Simplify and accelerate function pointer calls: The previous Power ABI had focused on providing functional completeness by representing each function pointer as a data structure (sometimes called a “function descriptor”) encapsulating the static and dynamic environments with three pointers – the instruction address, the static environment and the dynamic environment – to support a broad and diverse set of languages, including FORTRAN, Algol, PL/1, PL.8, PL.9, Pascal, Modula-2, and assembly. Using such a function pointer structure, each caller could set up the environment for the callee when making a function pointer call. + +Alas, with the introduction of self-initializing functions and no practical need to optimize performance for Pascal and Modula-2, the function descriptor offers little advantage, but incurs three extra memory references that lie on the critical path of function calls and data accesses. Thus, the new ABI represents a function pointer simply as the address of the function’s first instruction. + +In addition to these ABI improvements, the new OpenPOWER software environment also includes two new SIMD vector programming APIs optimized for the little-endian programming environment; the native API uses fully little-endian conventions for referencing data structures and vector elements within the Power SIMD vector processing unit. Where necessary, the compiler translates these new little-endian conventions to the underlying big-endian hardware conventions. This is particularly useful for writing native little-endian SIMD vector applications, or when porting SIMD vector code from other little-endian platforms. + +In addition, the compilers can also generate code for big-endian vector conventions but using little-endian data – an environment that is particularly useful for porting libraries originally developed for big-endian Power, such as IBM’s tuned mathematics libraries, which can support both big- and little-endian environments with a common source code. + +In order to simplify programming and enable code portability, we define two SIMD vector programming models: a natively little-endian model and a portability model for code developed on or shared with big-endian Power platforms. To efficiently implement these models, we extend compiler optimizations of the vector intermediate representation to eliminate data reformatting primitives. In addition to describing a framework for SIMD portability and for optimizing SIMD vector reformatting, we implement a novel vector operator optimization pass and measure its effectiveness: our implementation eliminates all data reformatting from application vector kernels, resulting in a speedup of up to 65% for a Power8 microarchitecture with two fully symmetric vector execution units. 
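As a small illustration of the natively little-endian vector conventions described above, the sketch below (an illustrative example, not code from the talk) uses the standard AltiVec/VSX built-ins as provided by GCC and Clang on a ppc64le target; under the little-endian model, element 0 is the lowest-addressed element, and the compiler supplies any translation to the big-endian lane conventions of the underlying hardware.

```c
/* Illustrative sketch of the little-endian vector element conventions.
 * Assumes a ppc64le toolchain with VSX/AltiVec enabled,
 * e.g. gcc -mcpu=power8 -maltivec example.c */
#include <altivec.h>
#include <stdio.h>

int main(void)
{
    /* Under the little-endian programming model, element 0 is the
     * lowest-addressed element of the vector. */
    vector signed int a = {10, 20, 30, 40};
    vector signed int b = { 1,  2,  3,  4};

    vector signed int sum = vec_add(a, b);

    /* vec_extract() uses the same little-endian element numbering;
     * any lane reformatting required by the hardware is inserted
     * by the compiler, not by the programmer. */
    for (int i = 0; i < 4; i++)
        printf("sum[%d] = %d\n", i, vec_extract(sum, i));

    return 0;
}
```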
+ +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Gschwind1_OPF2015_IBM_031315_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/join-first-openpowerchat-twitter-2.md b/content/blog/join-first-openpowerchat-twitter-2.md new file mode 100644 index 0000000..e61c877 --- /dev/null +++ b/content/blog/join-first-openpowerchat-twitter-2.md @@ -0,0 +1,29 @@ +--- +title: "Join Our first #OpenPOWERChat on Twitter" +date: "2018-02-20" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-summit" + - "openpower-foundation" + - "openpowerchat" + - "twitter-chat" + - "openpower-chat" +--- + +Hi OpenPOWER Foundation members, + +We’re excited to announce our first Twitter chat on Thursday, March 1 at 3:00 p.m. ET. + +Here are some of details you should know: + +- The chat will be hosted right on our [@OpenPOWERorg](https://twitter.com/openpowerorg?lang=en) Twitter page +- We will have two special guest hosts, Hugh Blemings, Executive Director and Robbie Williamson, Chair of the Board. +- You can join the conversation using #[OpenPOWERchat](https://twitter.com/search?f=tweets&q=%23openpowerchat&src=typd) + +The conversation will kick off at 3 PM ET, so please drop in and answer as many or as few questions as you can. + +Our chat will focus on the upcoming [OpenPOWER Summit](https://openpowerfoundation.org/summit-2018-03-us/) and the innovations and collaboration that OpenPOWER members have developed for the conference. + +[Click here](https://www.dropbox.com/s/puaeycvq94d7hos/%23OpenPOWERChat%20.ics?dl=0) to add this event to your calendar. We look forward to chatting with you on Twitter on March 1! diff --git a/content/blog/join-openpower-foundation-linux-conf-au.md b/content/blog/join-openpower-foundation-linux-conf-au.md new file mode 100644 index 0000000..895e5da --- /dev/null +++ b/content/blog/join-openpower-foundation-linux-conf-au.md @@ -0,0 +1,28 @@ +--- +title: "Join OpenPOWER Foundation at linux.conf.au" +date: "2019-01-18" +categories: + - "blogs" +tags: + - "featured" +--- + +By Hugh Blemings**,** Executive Director, OpenPOWER Foundation + +In a few hours I’ll board a flight to beautiful Christchurch, New Zealand, the home next week of this year’s [linux.conf.au](https://linux.conf.au/). I’m fortunate enough to have attended every LCA as well as [spoken at a few](https://www.youtube.com/watch?v=EKULvoKDUhc) and it remains, in my view, one of the preeminent technical open source conferences in the world. + +This year will be no different; again LCA has attracted speakers and attendees from all around the globe. The theme for 2019 is the use of free open source software and hardware with emphasis on Internet of Things, security, privacy, environment, communication, health and ethics. + +Ably assisted by some smart folk from the community, I’ll be leading an [OpenPOWER Bird of Feather session](https://2019.linux.conf.au/wiki/OpenPOWER_BoF) at the conference. Given the growing interest in open hardware for medium to high-end compute and OpenPOWER in general, I think it’s a particularly timely session. Oh and I’ll have a [Raptor Computing Systems](https://twitter.com/RaptorCompSys) [Blackbird POWER9 Motherboard](https://secure.raptorcs.com/content/BK1MB1/intro.html) for show and tell too! 
+ +There are other sessions on the agenda that the OpenPOWER community will be interested in: + +- [Reliable Linux Kernel Crash Dump with Micro-Controller Assistance](https://linux.conf.au/schedule/presentation/235/) – an overview of the concept, design, implementation and learning, from a framework that allows for guaranteed capture of the memory state of both the crashed Linux kernel and the OPAL firmware it runs on. +- [Clang Built Linux](https://linux.conf.au/schedule/presentation/210/) – a demonstration of how to build a kernel with clang and an overview of remaining work to be done. +- [Booting faster](https://linux.conf.au/schedule/presentation/105/) – covers the efforts over the past several years into making POWER based systems boot faster. A full stack deep dive into what it takes to cold (and warm) boot (and reboot) a system. +- [Bugs in your server](https://linux.conf.au/schedule/presentation/265/) – will demonstrate methods of gaining complete, persistent control of the BMC using a variety of useful hardware features. +- [Taking it to the Nest Level – Nested KVM on the POWER9 Processor](https://linux.conf.au/schedule/presentation/145/) – delving into the rational behind developing software to support nested virtualization and the implementation details associated with it. +- [Climbing the Summit with Open Source and POWER9](https://linux.conf.au/schedule/presentation/155/) – an overview of the experience of developing Summit, the world’s fastest supercomputer, and how open source is used with POWER9. +- [Petitboot: Linux in the Bootloader](https://linux.conf.au/schedule/presentation/158/) – will cover the Petitboot bootloader – what it is, how it works, the positives and the challenges of delivering an open source bootloader, how it fits in with the current bootloader ecosystem, and where Petitboot could go in the future. + +If you’re attending linux.conf.au, let me know on [Twitter @hughhalf](https://twitter.com/hughhalf) and come and say hi! diff --git a/content/blog/join-sc16-treasure-hunt.md b/content/blog/join-sc16-treasure-hunt.md new file mode 100644 index 0000000..08fef08 --- /dev/null +++ b/content/blog/join-sc16-treasure-hunt.md @@ -0,0 +1,68 @@ +--- +title: "Join the SC16 Treasure Hunt!" +date: "2016-11-11" +categories: + - "blogs" +tags: + - "featured" +--- + +## ![openpower_treasurehunt_banner](images/OpenPower_TreasureHunt_Banner.png) + +## Calling all Treasure Hunters! + +We’re going to give you a chance to be a part of the OpenPOWER Revolution – but you’re going to have to earn it. Guided by our clues, we’ll show you the latest advancements and applications on the OpenPOWER platform. + +### Use what you discover to solve any **three** of our five clues below! + +Once we verify your answers we'll reward you for your efforts with a FREE custom designed OpenPOWER T-shirt to serve as a wearable trophy for your successful completion of our Treasure Hunt! + +## ![openpower-tshirt-full-mockup-v3c](images/OpenPOWER-Tshirt-Full-Mockup-V3c-1024x673.jpg)Here is your first clue! + +![](images/ChallengeTemplate_Clue1_v2.png) + +To solve this clue, watch this [new video showcasing how OpenPOWER members Kinetica, NVIDIA, and IBM](https://www.youtube.com/watch?v=GZAFzlWN8FU) are helping retailers analyze data faster than ever before! + +https://www.youtube.com/watch?v=GZAFzlWN8FU + +**To solve this clue, tell us how much faster Kinetica runs on the IBM-NVIDIA system in the Google Form!** + +## You're on your way! Recognize NVIDIA CEO Jen-Hsun Huang? 
He has the key to the next clue! + +![challengetemplate_clue2_tw](images/ChallengeTemplate_Clue2_TW.png) + +Read NVIDIA CEO Jen-Hsun Huang's blog post, the [Intelligent Industrial Revolution](https://blogs.nvidia.com/blog/2016/10/24/intelligent-industrial-revolution/), and answer the question in the Google form below! + +**According to NVIDIA CEO Jen-Hsun Huang, what is IBM's new POWER8-NVLink server designed to bring?** + +## Almost halfway there! Here's the third clue in the #HexMarksTheSpot Treasure Hunt! + +![](images/ChallengeTemplate_Clue3_v2.png) + +_Hounding_ for the solution? Visit [http://bit.ly/DogDemo](http://bit.ly/DogDemo) to try out the OpenPOWER Dog Identification Demo using GPUs! **Share a screenshot of your ID'd dog on Twitter using the hashtag #HexMarksTheSpot to complete this clue!** + +Want to learn more about deep learning on OpenPOWER and how the demo works? Visit our blog post, [Deep Learning Goes to the Dogs](https://openpowerfoundation.org/blogs/deep-learning-goes-to-the-dogs/). + +## You're solving this Treasure Hunt so fast you're making Captain Jack Sparrow jealous! Here's your fourth clue. + +![challengetemplate_clue4_tw](images/ChallengeTemplate_Clue4_TW.png) + +The answer can also be found in OpenPOWER advocate Sumit Gupta's blog post, "[IBM turns POWER HPC momentum up to 11](http://ibm.co/2fuFl6t)!" + +**After reading it, use the Google form to tell us how many of Intersect360's Top 10 HPC Applications are currently supported on OpenPOWER!** + +## Can you see the Hex yet?! Here is your Final Clue! + +![challengetemplate_clue5_tw](images/ChallengeTemplate_Clue5_TW.png) + +See what the possibilities are by reading about the new [PowerAI Deep Learning package](http://ibm.co/2fS8t60) here: [http://ibm.co/2fuGaMV](http://ibm.co/2fS8t60). + +**Tell us three of the deep learning distributions supported by the new package in the Google Form.** + +# That's it, you did it! Complete the Google Form below, including your shipping information. Once we verify your answers, we'll let you know you're a winner and send your t-shirt! Please allow 5-10 business days for processing and shipping. + +## Share your Treasure Hunt progress with the hashtag #HexMarksTheSpot! diff --git a/content/blog/join-us-for-an-action-packed-2015-summit.md b/content/blog/join-us-for-an-action-packed-2015-summit.md new file mode 100644 index 0000000..90ef367 --- /dev/null +++ b/content/blog/join-us-for-an-action-packed-2015-summit.md @@ -0,0 +1,10 @@ +--- +title: "Join Us for an Action Packed 2015 Summit" +date: "2015-01-20" +categories: + - "blogs" +--- + +- More than 30 presentations covering a wide field of OpenPOWER topics +- Demo pavilion +- ISV roundtable +- Firmware training + +It's going to be an action-packed Summit, so come join us. 
diff --git a/content/blog/key-value-store-acceleration-with-openpower.md b/content/blog/key-value-store-acceleration-with-openpower.md new file mode 100644 index 0000000..0aeca02 100644 --- /dev/null +++ b/content/blog/key-value-store-acceleration-with-openpower.md @@ -0,0 +1,28 @@ +--- +title: "Key-Value Store Acceleration with OpenPOWER" +date: "2015-01-17" +categories: + - "blogs" +--- + +### Objective + +- To showcase a broadly relevant data center application on FPGAs and OpenPOWER and the benefits it can bring
- To demonstrate the advantages that OpenPOWER’s shared virtual memory concept offers
- To entice partner companies to develop infrastructure and more sophisticated designs on top of our FPGA-based accelerator card + +### Abstract + +Distributed key-value stores such as memcached form a critical middleware application within today’s web infrastructure. However, typical x86-based systems yield limited performance scalability and high power consumption, as their architecture, with its optimization for single-thread performance, is not well matched to the memory-intensive and parallel nature of this application. In this talk, we present the architecture of an accelerated key-value store appliance that leverages a novel data-flow implementation of memcached on a Field Programmable Gate Array (FPGA) to achieve up to 36x in performance/power at response times in the microsecond range, as well as the coherent integration of memory through IBM’s OpenPOWER architecture, utilizing host memory and CAPI-attached flash as the value store. This allows for economic scaling of value store density to terabytes while providing an open platform that can be augmented with additional functionality, such as data analytics, that can be easily partitioned between the POWER8 processor and the FPGA. + +### Biography + +Michaela Blott graduated from the University of Kaiserslautern in Germany. She has worked in both research institutions (ETH and Bell Labs) and development organizations, and was deeply involved in large-scale international collaborations such as NetFPGA-10G. Her expertise spans high-speed networking, emerging memory technologies, data centers and distributed computing systems, with a focus on FPGA-based implementations. Today, she works as a principal engineer at the Xilinx labs in Dublin, heading a team of international researchers. Her key responsibility is exploring applications, system architectures and new design flows for FPGAs in data centers.
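As context for the workload described in the abstract above, the short sketch below is purely illustrative: it shows the kind of memcached set/get traffic such an appliance serves, simply issued from a Python client (using the pymemcache library). The host name and port are placeholders, not part of the talk's material; an FPGA-based appliance would answer the same protocol requests, just without the x86 software path.

```python
# Illustrative only: a typical memcached set/get round trip, the request
# pattern the FPGA-based key-value store appliance accelerates.
# Requires "pip install pymemcache"; host and port are placeholders.
from pymemcache.client.base import Client

client = Client(("kv-appliance.example.com", 11211))

# Store a value under a key, then read it back.
client.set("user:1001:name", b"Ada Lovelace", expire=300)
value = client.get("user:1001:name")
print(value)  # b'Ada Lovelace'

client.close()
```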
+ +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Blott-Michaela_OPFS2015_Xilinx_031615_v8_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/libre-soc-180nm-power-isa-asic-submitted-to-imec-for-fabrication.md b/content/blog/libre-soc-180nm-power-isa-asic-submitted-to-imec-for-fabrication.md new file mode 100644 index 0000000..d2e9090 --- /dev/null +++ b/content/blog/libre-soc-180nm-power-isa-asic-submitted-to-imec-for-fabrication.md @@ -0,0 +1,35 @@ +--- +title: "Libre-SOC 180nm Power ISA ASIC Submitted to Imec for Fabrication" +categories: + - "blogs" +tags: + - "power-isa" + - "libre-soc" + - "180nm-power-isa-test-asic" + - "chips4makers" + - "sorbonne-university" + - "imec" + - "tsmc" +date: "2021-07-08" +draft: false +--- + +[Libre-SOC](https://libre-soc.org/)'s 180nm Power ISA Test ASIC, developed in conjunction with [Chips4Makers](https://chips4makers.io/) and Sorbonne Université’s [LIP6](https://www.lip6.fr/?LANG=en), has been submitted to [Imec](https://www.imec-int.com/en)’s MPW Shuttle Service for fabrication in [TSMC](https://www.tsmc.com/english) 180nm. + +The team that collaborated on the project has a wealth of expertise in software engineering and ethical hardware design, and as a matter of principle used a fully free and open source toolchain to deliver this groundbreaking chip. This makes it the first ASIC of its kind, with many more to come - each edging closer to an attractive open hardware alternative to current proprietary offerings. The project was funded by [NLnet Foundation](https://nlnet.nl/) as part of its Next Generation Internet initiative, as a fundamental technological building block that will help increase privacy and trustworthiness for end users. + +Implementing a fixed-point subset of the v3.0B OpenPOWER ISA, Libre-SOC’s 180nm Power ISA Test ASIC is the world's first Power ISA implementation designed outside of IBM to go to silicon, following [IBM’s open sourcing of the POWER ISA in 2019](https://newsroom.ibm.com/2019-08-21-IBM-Demonstrates-Commitment-to-Open-Hardware-Movement). Libre-SOC used Microwatt, which was designed by IBM and [sent to Skywater for fabrication earlier this year](https://openpowerfoundation.org/openpower-foundation-provides-microwatt-for-fabrication-on-skywater-open-pdk-shuttle/), as a reference design for benchmarking and cross-verification. + +\[caption id="attachment\_7838" align="aligncenter" width="500"\]![Snapshot of the 180nm GDS-II file laid out automatically with coriolis2](images/Libre-SOC-ASIC-1024x1024.png) Snapshot of the 180nm GDS-II file laid out automatically with coriolis2\[/caption\] + +The ASIC is 130,000 gates, measures 5.5 x 5.9 mm^2, contains four 4k SRAMs developed by Chips4Makers, and a 300 mhz Voltage-Controlled PLL developed by [Professor Galayko](https://www.lip6.fr/actualite/personnes-fiche.php?ident=P230) of Sorbonne Université. The VLSI tape-out was carried out by [Jean-Paul Chaput](https://lip6.fr/Jean-Paul.Chaput) of Sorbonne Université using coriolis2, and the Static Timing Analysis and LVS checking by [Dr. Marie-Minerve Louërat](https://www-soc.lip6.fr/users/marie-minerve-louerat/) of Sorbonne Université. The HDL of the core is entirely in nmigen, a python Object-Orientated HDL. + +The Cell Library used, FlexLib, also sponsored by NLnet, was developed by [Staf Verhaegen](https://www.linkedin.com/in/staf-verhaegen-b3316b/?originalSubdomain=be) of Chips4Makers, and is Libre-Licensed. 
Symbolic (ghost) versions of FlexLib allowed Libre-SOC developers to avoid signing a Foundry NDA during development of the ASIC layout: an important requirement to fulfil their transparency obligations to NLnet under the Privacy and Enhanced Trust Programme. + +LIP6 developed the VLSI ASIC layout tool, coriolis2. Coriolis2 is also entirely Libre-licensed and is a fully automated HDL-to-GDS-II tool which requires no manual intervention. It is independent of OpenLANE, is developed entirely in Europe, and has the same fully automated capability of turning HDL into 100% DRC-clean GDS-II. + +LIP6 were able to create the GDS-II tape-out under NDA using "Real" (non-symbolic) versions of Chips4Makers’ FlexLib, whilst Libre-SOC developers assisted using Symbolic Cells. + +“We developed this ASIC on the Power architecture because of its supercomputing pedigree, and the decades-long commitment and stability that IBM and other OpenPOWER Foundation members have sustained,” said [Luke Kenneth Casson Leighton](https://libre-soc.org/lkcl/), lead developer and project coordinator for Libre-SOC. “On this strong base, we can build a reliable, efficient Hybrid 3D CPU-VPU-GPU, and our next test ASIC will include Draft Cray-style Vector Extensions, SVP64.” + +For more information, contact the developers of Libre-SOC at [http://libre-soc.org](http://libre-soc.org/). diff --git a/content/blog/life-at-the-intersection-openpower-open-compute-and-the-future-of-cloud-software-infrastructure.md b/content/blog/life-at-the-intersection-openpower-open-compute-and-the-future-of-cloud-software-infrastructure.md new file mode 100644 index 0000000..44a9918 100644 --- /dev/null +++ b/content/blog/life-at-the-intersection-openpower-open-compute-and-the-future-of-cloud-software-infrastructure.md @@ -0,0 +1,38 @@ +--- +title: "Life at the Intersection: OpenPOWER, Open Compute, and the Future of Cloud Software & Infrastructure" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Objectives + +1. Provide Rackspace’s point of view about what “the Cloud” needs from OpenPOWER, OCP, and developers in major software initiatives (OpenStack, Linux, hypervisors, etc.).
2. Share observations about working cross-functionally amongst development communities, especially ones that develop as-a-Service platforms. How best to engage?  Common mistakes.  Success stories.  What’s the give and take?
3. Share what Rackspace (as a case study) plans to achieve now, and over the next few years, with OpenPOWER and Open Compute. + +### Abstract + +Open hardware has the potential to disrupt the datacenter and the world of software development in very positive ways.  OpenPOWER takes that potential a few steps further, both in the core system, and with technologies like CAPI.  These innovations raise the possibility of performance and efficiency improvements of a magnitude not seen for a long time. + +The potential is there, but how do we drive adoption?  From platform developers?  From software developers?  From communities like OpenStack?  From service providers?  From end users?  And if we’re going to do it in the Open, that brings both big opportunities, and big challenges.  How do we manage that? + +This talk will explore the past experience and current impressions of someone who has done development work at the intersection of OpenStack and Open Compute for a few years. 
It will cover his experience working with teams building & integrating hardware and software, for large scale as-a-Service deployments of OpenStack Nova and Ironic on Open Compute hardware. + +It will also cover his take on the state of open hardware and software development today, and future frontiers.  He'll present his thoughts and experiences getting as-a-Service developers to move further down the hardware stack, enabling the use of OpenPOWER features and technologies for the masses. + +### Bio + +[Aaron Sullivan](https://www.linkedin.com/profile/view?id=12025780&authType=NAME_SEARCH&authToken=dLV3&locale=en_US&srchid=32272301421438774069&srchindex=6&srchtotal=95&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438774069%2CVSRPtargetId%3A12025780%2CVSRPcmpt%3Aprimary) is a Senior Director and Distinguished Engineer at Rackspace, focused on infrastructure strategy. Aaron joined Rackspace's Product Development organization in late 2008, in an engineering role, focused on servers, storage, and operating systems. He moved to Rackspace’s Supply Chain/Business Operations organization in 2010, mostly focused on next generation storage and datacenters. He became a Principal Engineer during 2011 and a Director in 2012, supporting a variety of initiatives, including the development and launch of Rackspace’s first Open Compute platforms. He became a Senior Director and Distinguished Engineer in 2014. + +These days, he spends most of his time working on next generation server technology, designing infrastructure for Rackspace’s Product and Practice Areas, and supporting the growth and capabilities of Rackspace’s Global Infrastructure Engineering team. He also frequently represents Rackspace as a public speaker, writer, and commentator. He was involved with the Open Compute Project (OCP) since its start at Rackspace. He became formally involved in late 2012. He is Rackspace’s lead for OCP initiatives and platform designs. Aaron is serving his second term as an OCP Incubation Committee member, and sponsors the Certification & Interoperability (C&I) project workgroup. He supported the C&I workgroup as they built and submitted their first test specifications. He has also spent time working with the OCP Foundation on licensing and other strategic initiatives. + +Aaron previously spent time at GE, SBC, and AT&T. Over the last 17 years, he’s touched more technology than he cares to talk about. When he’s not working, he enjoys reading science and history, spending time with his wife and children, and a little solitude. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Sullivan-Aaron_OPFS2015_Rackspace_031315_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/linking-up-cpta-and-opf-global-members-to-help-global-opf-member-to-use-cpta-as-a-stepping-stone-to-go-into-china-market.md b/content/blog/linking-up-cpta-and-opf-global-members-to-help-global-opf-member-to-use-cpta-as-a-stepping-stone-to-go-into-china-market.md new file mode 100644 index 0000000..99546df --- /dev/null +++ b/content/blog/linking-up-cpta-and-opf-global-members-to-help-global-opf-member-to-use-cpta-as-a-stepping-stone-to-go-into-china-market.md @@ -0,0 +1,26 @@ +--- +title: "Linking up CPTA and OPF global members, to help global OPF member to use CPTA as a stepping stone to go into China market" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Speaker + +Mr. 
Zhu Ya Dong, Chairman of PowerCore, China, Platinum Member of OpenPOWER Foundation + +### Objective + +The objective is to position the China POWER Technology Alliance (CPTA) as a mechanism to help global OpenPOWER Foundation members engage with China organizations on POWER-based implementations in China. + +### Abstract + +The OpenPOWER ecosystem has grown fast in the China market, adding 12 OPF members in 2014. The China POWER Technology Alliance was established in Oct. 2014, led by the China Ministry of Industry and Information Technology (MIIT), to accelerate the building of a secure and trusted IT industry chain in China by leveraging OpenPOWER technology. This presentation aims to link up CPTA and global OPF members, helping global OPF members use CPTA as a stepping stone into the China market. It will focus on explaining to global OPF members WHY they should come to China, and above all, HOW to come to China, and WHAT support services CPTA will provide to them. It will also clarify the relationship between CPTA and OPF in China, so that OPF members can leverage CPTA as a (non-mandatory) on-ramp to China. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Zhu-Ya-Dong_OPFS2015_Powercore_011315.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/linux-conf-au-2020-openisa-miniconf-explored-openpower-and-risc-v-possibilities.md b/content/blog/linux-conf-au-2020-openisa-miniconf-explored-openpower-and-risc-v-possibilities.md new file mode 100644 index 0000000..1992754 100644 --- /dev/null +++ b/content/blog/linux-conf-au-2020-openisa-miniconf-explored-openpower-and-risc-v-possibilities.md @@ -0,0 +1,42 @@ +--- +title: "Linux.conf.au 2020: OpenISA miniconf explored OpenPOWER and RISC-V Possibilities" +date: "2020-02-03" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "hugh-blemings" + - "linux-conf-au" + - "risc-v" + - "alistair-francis" + - "openisa-miniconf" +--- + +By: [Hugh Blemings](https://www.linkedin.com/in/hugh-blemings/detail/recent-activity/), Executive Director, OpenPOWER Foundation + +\[caption id="attachment\_7341" align="alignnone" width="700"\]![](images/OpenISA-1024x576.jpg) Hugh Blemings and Alistair Francis at the OpenISA miniconf, held at linux.conf.au in January, 2020.\[/caption\] + +In mid-January, I made what has become an annual pilgrimage to [linux.conf.au](https://linux.conf.au/) - the Linux/Open Source conference of choice for antipodeans, not to mention a sizeable contingent of presenters and attendees from places further afield. + +Over the years I’ve had the good fortune to be involved in many capacities at LCA, and 2020 was no different. I not only presented a session but also co-organised a day-long “miniconf” on a subject close to my heart. + +[Alistair Francis](https://www.linkedin.com/in/alistair23/) (part of the great RISC-V crew at [Western Digital](https://www.westerndigital.com)) and I ran what we believe was one of the first OpenISA miniconferences, with sessions covering both RISC-V and OpenPOWER, along with sessions on general ISA-related topics. I’ll come back to these in a moment but first wanted to give a bit of a tour of the OpenPOWER-related ones. + +First up was the session "[POWER OpenISA and Microwatt introduction](https://www.youtube.com/watch?v=DFGK8rdWWvs)" by Michael Neuling.
Unfortunately, Mikey got called away on business at the last minute, so [Anton Blanchard](https://www.linkedin.com/in/antonblanchard/), a fellow IBMer, stepped in. The session gives a great overview of both the now-opened ISA as well as Microwatt, a FPGA softcore implementation. + +Anton’s originally scheduled talk "[Build your own Open Hardware CPU in 25 minutes or less](https://www.youtube.com/watch?v=g3slH03MCmo)" was up next. It gave a bit more context around the Microwatt simulation and how easy it is to add instructions to the implementation. + +Last but by no means least of the OpenPOWER-specific sessions in the miniconf was [Paul Mackerras](https://github.com/paulusmack)’ deep dive ["Microwatt Microarchitecture"](https://www.youtube.com/watch?v=JkDx_y0onSk) in which he gave a detailed tour through the implementation of Microwatt and some of the architectural decisions and optimizations that have already been made. + +As I mentioned earlier, aside from the OpenPOWER specific talks, Alistair and I co-presented an [intro session](https://www.youtube.com/watch?v=1NM_ZNlFMKQ&feature=youtu.be) that gave a snapshot of both ecosystems and how they fit together in early 2020. Beyond this, there were several talks that covered both RISC-V and other general Open ISA topics. All are worth a look but my own favourites are probably either [Keith Packard’](https://www.linkedin.com/in/keithrpackard/)s session on [“picolibc: a C library for smaller systems”](https://www.youtube.com/watch?v=SC6aBezNFFQ) or [Sean "xobs" Cross](https://twitter.com/xobs?lang=en) on ["Paying it Forward: Documenting your Open Hardware Module."](https://www.youtube.com/watch?v=LumvbPLtgxw) I’ve listed all the miniconf sessions for reference at the end of this post. + +Later in the main conference program, I presented a session ["Open AND high-performance Computing"](https://www.youtube.com/watch?v=poUGzQXHTak&t=1s) which pointed out that as an industry we need to have computing hardware that is both open and provides high performance - and that is just what OpenPOWER provides. I then gave an update on the OpenISA before a very enjoyable and thought-provoking Q&A from the audience. + +Linux.conf.au is always a great week and I recommend the many other sessions and keynotes. They’re all available through the linux.conf.au [YouTube channel](https://www.youtube.com/channel/UCciKHCG06rnq31toLTfAiyw). + +_PS - Make sure to follow along for more updates from industry events. For example, Anton Blanchard had a great session at the Chisel Community Conference last week - keep an eye out for an upcoming blog post on it!_ + +  + +
| Session Name | Speaker(s) |
| --- | --- |
| OpenISA Miniconf Intro | Alistair Francis & Hugh Blemings |
| RISC-V software ecosystem in 2020 | Atish Patra |
| RISC-V FDPIC/NOMMU toolchain/runtime support | Maciej W. Rozycki |
| RISC-V 32-bit glibc port | Alistair Francis |
| Co-developing RISC-V Hypervisor Support | Anup Patel |
| POWER OpenISA and Microwatt introduction | Mikey Neuling (Anton Blanchard presenting) |
| Build your own Open Hardware CPU in 25 minutes or less | Anton Blanchard |
| Microwatt Microarchitecture | Paul Mackerras |
| Paying it Forward: Documenting your Open Hardware Module | Sean “xobs” Cross |
| picolibc: a C library for smaller systems | Keith Packard |
| Universal Tools for Acceleration, Timing, Integration & Machine Enhancement | Hasjim Williams |
diff --git a/content/blog/liquid-cooling-for-openpower-asetek-accelerates-the-performance-of-openpower-platforms.md b/content/blog/liquid-cooling-for-openpower-asetek-accelerates-the-performance-of-openpower-platforms.md new file mode 100644 index 0000000..08a8d80 --- /dev/null +++ b/content/blog/liquid-cooling-for-openpower-asetek-accelerates-the-performance-of-openpower-platforms.md @@ -0,0 +1,37 @@ +--- +title: "Liquid Cooling for OpenPOWER: Asetek Accelerates the Performance of OpenPOWER Platforms" +date: "2015-08-24" +categories: + - "blogs" +tags: + - "openpower" + - "featured" + - "ecosystem" + - "asetek" + - "cooling" + - "liquid-cooling" +--- + +_By Larry Vertal, Data Center Marketing, Asetek_ + +[![Asetek Liquid Cooler for POWER8](images/APLC-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/08/APLC.jpg)[Asetek](http://asetek.com/ "Asetek Liquid Cooling")® joined the OpenPOWER™ Foundation in July of this year with great enthusiasm. As the world’s leading provider of liquid cooling systems for CPU and GPUs, with over 2 million units sold, Asetek knows it can bring a lot to OpenPOWER designs and enable the community to productize the highest performance systems and clusters leveraging liquid cooling for OpenPOWER. + +Asetek is already engaged in delivering liquid cooling designs that accelerate the performance of OpenPOWER platforms. At the 2015 International Supercomputing Conference (ISC15) in Frankfurt, Germany, in July of this year, Asetek provided the first public showing of a liquid cooling system for POWER8 processors. Particularly interesting about this design is that it enables POWER8 server nodes to utilize the highest performing overclocked Power processors without concerns for throttling. + + + +Given Asetek’s history in enabling [Top500 HPC sites](http://asetek.com/data-center/data-center-installations/), the current cutting edge performance and expected enhancements to POWER processors will likely demonstrate a need for liquid cooling to provide non-throttling clusters with extreme rack densities. + +OpenPOWER member innovations – including custom systems for large-scale data centers, workload acceleration through GPUs and advanced hardware technology exploitation – can all benefit from having a fellow member with proven leadership in CPU, GPU and overall data center liquid cooling. Asetek looks forward to being able to enable members to push their design imaginations, knowing that they have liquid cooling as a tool. + +Asetek’s approach to liquid cooling is extremely flexible in adapting to different server designs and board layouts. The proven approach brings hot water for cooling directly to the high heat flux components within servers such as CPUs, GPUs and memory. Since CPUs run quite hot (153°F to 185°F) and hotter still for memory and GPUs, there is no need for expensive chillers to cool the water returning from the servers. In addition, the cooling efficiency of water (4000x that of air) allows Asetek’s RackCDU Direct-to-Chip™ (D2C) to cool with hot water. Hot water cooling allows the use of dry coolers rather than chillers. Enabling extreme Kilowatt density racks also reduces the power required for server fans. RackCDU D2C uses a distributed pumping model. The cooling plate/pump replaces the air heat sink on the CPUs or GPUs in the server. Each pump/cold plate has sufficient pumping power to cool the whole server, providing redundancy in a two or more CPU server node. 
Unlike centralized pumping systems which require high pressures, the pressure needed to operate the system is very low, making it an inherently more reliable system for OpenPOWER. + + + +Asetek looks forward to continuing its involvement in the OpenPOWER Foundation, and working with fellow members to provide sustained throughput and the best performance for high-density overclocked systems. + +Follow Asetek on Twitter: [https://twitter.com/asetek](https://twitter.com/asetek) Like Asetek on Facebook: [https://www.facebook.com/Asetek](https://www.facebook.com/Asetek) Follow Asetek on LinkedIn: [https://www.linkedin.com/company/asetek-inc](https://www.linkedin.com/company/asetek-inc.) + +* * * + +_About Larry Vertal [![Larry Vertal, Asetek](images/BW-Larry-press-causual-small-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/08/BW-Larry-press-causual-small.jpg)Larry Vertal is tasked with data center marketing for Asetek.  Larry was Founding Director of The Green Grid and later Executive Director. He was previously Senior Strategist, Corporate Brand for Advance Micro Devices and Director of Enterprise Marketing for AMD.   Earlier, he was VP of Marketing for Conita Technologies and held a variety of positions in AT&T and NCR including Director, Strategic Relations and Director of Product Marketing.  Larry was responsible for the multiprocessor systems business at both AST Research and MAI/Basic Four Systems.  He was a founder of a number of technology start-ups in software and services.  He has provided clients and employers, from start-ups to Fortune 100 corporations, executive management, advisory and corporate development services._ diff --git a/content/blog/machine-learning-openpower-developer-congress.md b/content/blog/machine-learning-openpower-developer-congress.md new file mode 100644 index 0000000..4ece0d0 --- /dev/null +++ b/content/blog/machine-learning-openpower-developer-congress.md @@ -0,0 +1,58 @@ +--- +title: "Hacking Through Machine Learning at the OpenPOWER Developer Congress" +date: "2017-05-02" +categories: + - "blogs" +tags: + - "openpower" + - "featured" + - "machine-learning" + - "openpower-machine-learning-work-group" + - "developers" + - "developer-congress" + - "openpower-developer-congress" +--- + +By Sumit Gupta, Vice President, IBM Cognitive Systems + +10 years ago, every CEO leaned over to his or her CIO and CTO and said, “we got to figure out big data.” Five years ago, they leaned over and said, “we got to figure out cloud.”  This year, every CEO is asking their team to figure out “AI” or artificial intelligence. + +IBM laid out an accelerated computing future several years ago as part of our OpenPOWER initiative. This accelerated computing architecture has now become the foundation of modern AI and machine learning workloads such as deep learning. Deep learning is so compute intensive that despite using several GPUs in a single server, one computation run of deep learning software can take days, if not weeks, to run. + +The OpenPOWER architecture thrives on this kind of compute intensity. The POWER processor has much higher compute density than x86 CPUs (there are up to 192 virtual cores per CPU socket in Power8). This density per core, combined with high-speed accelerator interfaces like NVLINK and CAPI that optimize GPU pairing, provides an exponential performance benefit. 
And the broad OpenPOWER Linux ecosystem, with 300+ members, means that you can run these high-performance POWER-based systems in your existing data center either on-prem or from your favorite POWER cloud provider at costs that are comparable to legacy x86 architectures. + +**Take a Hack at the Machine Learning Work Group** + +The recently formed OpenPOWER Machine Learning Work Group gathers experts in the field to focus on the challenges that machine learning developers are continuously facing. Participants identify use cases, define requirements, and collaborate on solution architecture optimizations. By gathering in a workgroup with a laser focus, people from diverse organizations can better understand and engineer solutions that address similar needs and pain points. + +The OpenPOWER Foundation pursues technical solutions using POWER architecture from a variety of member-run work groups. The Machine Learning Work Group is a great example of how hardware and software can be leveraged and optimized across solutions that span the OpenPOWER ecosystem. + +**Accelerate Your Machine Learning Solution at the Developer Congress** + +This spring, the OpenPOWER Foundation will host the [OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/), a “get your hands dirty” event on May 22-25 in San Francisco. This unique event provides developers the opportunity to create and advance OpenPOWER-based solutions by taking advantage of on-site mentoring, learning from peers, and networking with developers, technical experts, and industry thought leaders. If you are a developer working on Machine Learning solutions that employ the POWER architecture, this event is for you. + +The Congress is focused full stack solutions — software, firmware, hardware infrastructure, and tooling. It’s a hands-on opportunity to ideate, learn, and develop solutions in a collaborative and supportive environment. At the end of the Congress, you will have a significant head start on developing new solutions that utilize OpenPOWER technologies and incorporate OpenPOWER Ready products. + +There has never been another event like this one. It’s a developer conference devoted to developing, not sitting through slideware presentations or sales pitches. Industry experts from the top companies that are innovating in deep learning, machine learning, and artificial intelligence will be on hand for networking, mentoring, and providing advice. + +**A Developer Congress Agenda Specific to Machine Learning** + +The OpenPOWER Developer Congress agenda addresses a variety of Machine Learning topics. For example, you can participate in hands-on VisionBrain training, learning a new model and generating the API for image classification, using your own family pictures to train the model. The current agenda includes: + +- VisionBrain: Deep Learning Development Platform for Computer Vision +- GPU Programming Training, including OpenACC and CUDA +- Inference System for Deep Learning +- Intro to Machine Learning / Deep Learning +- Develop / Port / Optimize on Power Systems and GPUs +- Advanced Optimization +- Spark on Power for Data Science +- Openstack and Database as a Service +- OpenBMC + +**Bring Your Laptop and Your Best Ideas** + +[The OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/) will take place May 22-25 in San Francisco. 
The event will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. So bring your laptop and preferred development tools and prepare to get your hands dirty! + +**About the author** + +[![](images/IBM.png)](https://openpowerfoundation.org/wp-content/uploads/2017/05/IBM.png)Sumit Gupta is Vice President, IBM Cognitive Systems, where he leads the product and business strategy for HPC, AI, and Analytics. Sumit joined IBM two years ago from NVIDIA, where he led the GPU accelerator business. diff --git a/content/blog/making-power-open-to-the-enterprising-masses.md b/content/blog/making-power-open-to-the-enterprising-masses.md new file mode 100644 index 0000000..53bac32 100644 --- /dev/null +++ b/content/blog/making-power-open-to-the-enterprising-masses.md @@ -0,0 +1,8 @@ +--- +title: "Making Power Open to the Enterprising Masses" +date: "2014-05-15" +categories: + - "blogs" +--- + +Since their development in the 1990s, IBM Power Systems have served databases. They crunched big data for big business better than anyone else in the industry. But so that these systems would support the boom of mobile and cloud computing – not to mention social media and its unstructured data ilk – IBM decided to open POWER8 technology up to the world via the OpenPOWER Foundation. diff --git a/content/blog/making-unforgettable-mram-memory-openpower.md b/content/blog/making-unforgettable-mram-memory-openpower.md new file mode 100644 index 0000000..37e7e72 100644 --- /dev/null +++ b/content/blog/making-unforgettable-mram-memory-openpower.md @@ -0,0 +1,49 @@ +--- +title: "Making Unforgettable MRAM Memory with OpenPOWER" +date: "2016-10-25" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Adam McPadden, Lead Engineer, Burlington Systems Lab, IBM_ + +One of the key tenets of the OpenPOWER Foundation’s collaborative model is that having open systems and published interfaces allows people to create innovative architectures across all areas of the system, including ones, like memory, where there hasn't been much change in decades. + +In validation of this approach, OpenPOWER members IBM and Everspin have demonstrated a new way for OpenPOWER members to improve application performance with STT-MRAM on the memory bus of a POWER8 server. + +STT-MRAM is included within a broad memory classification commonly referred to as Storage Class Memory (SCM), whose performance attributes lie between traditional main memory DRAM and FLASH storage while offering the benefit of non-volatility, retaining data without power. Typically, applications cannot process data until it is loaded into memory from storage, causing a performance bottleneck.  With SCM, this is not necessary: the data always stays in memory, resulting in much faster application performance. Various types of SCM offer benefits over traditional memory.  STT-MRAM offers non-volatility at DRAM-like speeds with endurance 10^6 times better than NAND FLASH, while PCM and ReRAM offer higher capacity than DRAM and faster speeds than FLASH. + +SCM technologies, such as PCM, ReRAM and STT-MRAM, have been around for many years with the promise of faster system performance achievable by having non-volatility on the memory bus.  Unfortunately, due to scaling challenges and complex materials, scalable, production-volume SCM has been slow to develop.
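To make the "data always stays in memory" point concrete, here is a minimal, illustrative sketch (not taken from the work described in this post) of how an application can reach storage-class memory directly by memory-mapping a file on a DAX-mounted filesystem, so that reads and writes become ordinary loads and stores rather than block I/O. The mount point and file name are hypothetical.

```python
import mmap
import os

# Hypothetical file on a filesystem mounted with the DAX option
# (e.g. mount -o dax), so accesses bypass the page cache / block layer.
PATH = "/mnt/pmem/example.dat"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map the region; the byte-addressable updates below are ordinary
# memory stores rather than read()/write() system calls.
buf = mmap.mmap(fd, SIZE)
buf[0:5] = b"hello"          # data lands in the persistent media
print(bytes(buf[0:5]))

buf.close()
os.close(fd)
```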
+ +IBM, long realizing the performance potential of systems with SCM, dedicated teams of engineers and scientists from IBM Research and the Systems Development Lab to enable these new memory technologies in the POWER system architecture over the past two years, opening up a new opportunity for the OpenPOWER community to innovate with production level SCM technology as a viable media leveraging attach points such as CAPI, OpenCAPI and NVMe. SCM technologies will now allow OpenPOWER Foundation members the ability to combine high performance media with low latency and high bandwidth interfaces on the POWER architecture to achieve performance benefits beyond traditional FLASH. + +"New advanced memory technologies will have a disruptive impact on the industry.  This demonstration of MRAM in a POWER8 server running real applications is a great example of what OpenPOWER is all about - creating opportunities for industry partners to innovate and enabling choice in the market," explains Steve Fields, IBM Fellow and Chief Engineer of POWER systems. + +\[caption id="attachment\_4248" align="aligncenter" width="261"\]![Figure 1: IBM's Con Tutto Platform](images/con-tutto-1-261x300.png) Figure 1: IBM's Con Tutto Platform\[/caption\] + +## Driving Memory Performance with Con Tutto + +Enabling new memory technologies required IBM and its partners to develop a prototyping platform which would allow non-DRAM technologies to run at full bus speeds in their POWER8 server. This platform, named Con Tutto, combines FPGA flexibility with at-speed memory bus compatibility. The Con Tutto card allows POWER8 users to develop the software stack necessary for persistent memory support and better understand the system level characteristics associated with various SCM technologies today. + +\[caption id="attachment\_4249" align="aligncenter" width="625"\]![Figure 2: Storage Class Memory Latency](images/con-tutto-2-1024x575.png) Figure 2: Storage Class Memory Latency\[/caption\] + +High performance technologies such as STT-MRAM on the system memory bus offer a low latency attach point for applications to leverage persistent memory with direct access (DAX) from the application.  The performance value of SCM in a server depends heavily on the technology and implementation specifics.  Leveraging the Con Tutto card with STT-MRAM, in-system test results show up to 97% lower latency and 20X higher bandwidth when compared to a current generation FLASH NVMe card, and we are working to make this even faster. + +## Accelerating Applications with Unforgettable Memory + +IBM has partnered with Everspin Technologies to demonstrate their first production level pMTJ (Perpendicular Magnetic Tunnel Junction) STT-MRAM chips in a high performance S824L server seen in Figure 3, leveraging the lower power, higher performance offered by this architecture. + +\[caption id="attachment\_4251" align="aligncenter" width="300"\]![Figure 3: IBM S824L Server running STT_MRAM on the Memory Bus](images/Con-Tutto-4-300x222.png) Figure 3: IBM S824L Server running STT\_MRAM on the Memory Bus\[/caption\] + +While this STT-MRAM solution is in production, its capacity to date has limited broad usage to applications which need the benefits of non-volatility, high performance but do not need high capacity (write caching, journaling, etc).  The announcement of a 1Gb chip by Everspin will improve the viability for broader use cases.  
SCM technologies such as ReRAM, PCM and others will expand the application value proposition of persistent memory as they become mature. + +## Learn More at OpenPOWER Summit Europe in Barcelona + +IBM and Everspin will be showcasing this new solution in an application demo at the OpenPOWER Summit Europe, building on a previous demo shown at the 2016 OpenPOWER Summit in San Jose, CA, where IBM engineers and scientists were the first to demonstrate production level STT-MRAM on the memory bus of a POWER8 server using IBM’s DMI (Differential Memory Interface) bus.  In the demo, you’ll see the performance benefits of combining a high performance SCM and a low latency bus on key business applications. + +You can also learn more about Con Tutto by visiting these links on the OpenPOWER Foundation: + +- **[https://openpowerfoundation.org/presentations/contutto/](https://openpowerfoundation.org/presentations/contutto/)** +- [**https://openpowerfoundation.org/presentations/programmable-near-memory-acceleration-on-contutto/**](https://openpowerfoundation.org/presentations/programmable-near-memory-acceleration-on-contutto/) diff --git a/content/blog/meet-new-openpower-chair.md b/content/blog/meet-new-openpower-chair.md new file mode 100644 index 0000000..ab7e8be --- /dev/null +++ b/content/blog/meet-new-openpower-chair.md @@ -0,0 +1,46 @@ +--- +title: "Meet the New Chair of the OpenPOWER Foundation Board of Directors" +date: "2017-10-30" +categories: + - "blogs" +tags: + - "openpower" + - "machine-learning" + - "openpower-foundation" + - "artificial-intelligence" + - "robbie-williamson" + - "power-systems" + - "ai" + - "software" + - "hardware" + - "canonical" + - "ubunto" +--- + +_By Robbie Williamson, Chair of the Board of Directors, OpenPOWER Foundation._ + +\[caption id="attachment\_5056" align="alignleft" width="150"\][![Robbie Williamson, Chair, Board of Directors, OpenPOWER Foundation](images/03c6602-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2017/10/03c6602.jpg) "In the coming months, I plan to further open up access to OpenPOWER and lower the bar to entry." - Robbie Williamson, Chair, Board of Directors, OpenPOWER Foundation\[/caption\] + +The OpenPOWER Foundation is an innovative, collaborative and crucially important organization in the world of high-powered computing, and I’m fortunate and excited for the opportunity to lead its board of directors. + +First, let me share some of my background. I’ve been a “computer geek” since my first BASIC program on my Atari 800XL, and I’ve been working with opensource-based technologies since 2001. Today, I work for Canonical and help develop Ubuntu. I have experience across both hardware and software on POWER. I’ve worked on the POWER-based CellBE processor project, the first processor used on the Sony PlayStation 3. In addition, I played an instrumental role in getting the first port of Ubuntu working on POWER. + +My primary goal in my position as chair of the OpenPOWER board of directors is to showcase the potential and performance that you can get from the OpenPOWER platform. I want to see Power succeed in providing another infrastructure to developers. There’s so much opportunity for improved performance with POWER. + +## **Software and Hardware** + +If you only have hardware at your fingertips, you’re missing out on the full stack of options. + +As chairperson, I hope to bring more of a software focus to the organization. 
There is a vibrant and thriving community at the ecosystem level in software, and I want to build that up and support developers in the software space to attract and grow the software side of POWER, while continuing to support hardware. + +## **On AI and Machine Learning** + +POWER is currently advancing through innovations in Artificial Intelligence and Machine Learning. Utilizing these technologies means getting more bang for your buck. Space in data centers is limited, but AI and ML allow organizations to get more power from the same amount of space. For instance, four POWER machines can generate the same results as 10-12 traditional units. I plan to harness this potential and help POWER grow further in these areas. + +## **The POWER Ecosystem** + +In the coming months, I plan to further open up access to OpenPOWER and lower the bar to entry. I want to make developers excited about a POWER machine, not afraid of it. + +Once you realize what you can get out of POWER, it’s pretty badass. + +I’d love to talk with you about POWER systems and the OpenPOWER Foundation. [Connect with me on LinkedIn here!](https://www.linkedin.com/in/williamsonrobbie/) diff --git a/content/blog/mellanox-and-ibm-collaborate-to-provide-leading-data-center-solution-infrastructures.md b/content/blog/mellanox-and-ibm-collaborate-to-provide-leading-data-center-solution-infrastructures.md new file mode 100644 index 0000000..918ef18 100644 --- /dev/null +++ b/content/blog/mellanox-and-ibm-collaborate-to-provide-leading-data-center-solution-infrastructures.md @@ -0,0 +1,9 @@ +--- +title: "Mellanox and IBM Collaborate to Provide Leading Data Center Solution Infrastructures" +date: "2014-04-23" +categories: + - "press-releases" + - "blogs" +--- + +Mellanox recently announced a collaboration with IBM to produce tightly integrated server and storage solutions that incorporate our end-to-end FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet interconnect solutions with IBM POWER CPUs.  Combining IBM POWER CPUs with the world’s highest-performance interconnect solution will drive data at optimal rates, maximizing performance and efficiency for all types of applications and workloads, as well as enabling dynamic storage solutions that allow multiple applications to efficiently share data repositories. diff --git a/content/blog/mellanox-and-openpower-partners-sponsor-innov8-with-power8-academic-challenge.md b/content/blog/mellanox-and-openpower-partners-sponsor-innov8-with-power8-academic-challenge.md new file mode 100644 index 0000000..f3285b0 100644 --- /dev/null +++ b/content/blog/mellanox-and-openpower-partners-sponsor-innov8-with-power8-academic-challenge.md @@ -0,0 +1,29 @@ +--- +title: "Mellanox and OpenPOWER Partners Sponsor “Innov8 with POWER8” Academic Challenge" +date: "2014-10-28" +categories: + - "blogs" +--- + +By Scot Schultz, Director of HPC and Technical Computing, Mellanox Technologies + +Mellanox has partnered with IBM and a group of fellow OpenPOWER Foundation member companies, including NVIDIA and Altera, to launch a brand-new academic challenge for computer science graduate students.  Called “Innov8 with POWER8,” the program involves three top universities: North Carolina State University, Rice University and Oregon State University.  This fall semester, each school was provided with OpenPOWER-compatible IBM POWER8 Power Systems enabled with Mellanox’s industry-leading interconnect.
The goal of the challenge is to enable the students to leverage the OpenPOWER server platform to drive innovation on a variety of specialized projects, each focused on the themes of either Big Data, genomics or cloud computing. + +Earlier this year, at the IBM Impact 2014 conference, Mellanox demonstrated a 10x improvement in throughput and latency of a key-value store application on POWER8 architecture. Mellanox Host Channel Adapters provide the highest-performing interconnect solution for enterprise data centers, HPC and cloud computing, and are also capable of remote direct memory access (RDMA). RDMA allows direct memory access from remote systems without involving the operating system or other CPU resources; coupled with the innovative OpenPOWER-compatible POWER8 architecture, this makes it the perfect platform for the universities to accelerate research and development for real-world challenges. The projects are already in development, but the initial scope of project work looks exciting. Take a look below at what the universities will be working on this semester: + +**North Carolina State University** NCSU’s projects address real-world bottlenecks in deploying big data solutions. NCSU has built up a strong set of skills in Big Data, having worked closely with the IBM Power Systems team to push the boundaries in delivering what clients need.  These projects extend their work to the next level, taking advantage of the accelerators that are a core element of the POWER8 value proposition. + +- Project 1: NCSU will focus on Big Data optimization, accelerating the preprocessing phase of their Big Data pipeline with Power-optimized, coherently attached reconfigurable accelerators in FPGAs from Altera. The team will assess the work from the IBM Zurich Research Laboratory on text analytics acceleration, aiming to eventually develop their own accelerators.
- Project 2: The University’s second project focuses on smart storage. The team is looking to leverage the Zurich accelerator in the storage context as well. + +**Rice University** Rice University has recognized that genomics information consumes massive datasets; however, developing the infrastructure required to rapidly ingest, perform analytics, and store this information is a challenge. Rice’s initiatives, in collaboration with NVIDIA and Mellanox, are designed to accelerate the adoption of these new big data and analytics technologies in medical research and clinical practice. + +- Project 1: Rice students will exploit the massive parallelism of GPU accelerator technology and linear programming algorithms to provide a deeper understanding of basic organism biology, genetic variation and pathology, adapting a multi-GPU implementation of the simplex algorithm to genome assembly and benchmarking.
- Project 2: Students will develop new approaches to high-throughput, systematic identification of chromatin loops between genomic regulatory elements, utilizing GPUs to efficiently search, in parallel, the space of possible chromatin interactions for true chromatin loops. + +**Oregon State University** Oregon State University’s Open Source Lab has been a leader in open source cloud solutions on Power Systems, even providing cloud solution hosting for more than 160 projects. These new projects create strong Infrastructure-as-a-Service (IaaS) offerings, leveraging the network strengths of Mellanox, as well as improving the management of the cloud solutions via a partnership with Chef.
- Project 1: Oregon State University will focus on cloud enablement, working to create an OpenPOWER stack environment to demonstrate Mellanox networking and cloud capabilities.
- Project 2: The University will take an open technology approach to cloud, using Linux, OpenStack and KVM to create a platform environment managed by Chef in the university’s Open Source Lab. + +As you can see, the work that is underway is impressive. Mellanox salutes each of the students involved, and we look forward to hearing about their progress throughout the semester, and ultimately learning which student team is named “Best in Class” at the IBM InterConnect conference in February! + +This challenge is just the beginning. Universities may become members of the OpenPOWER Foundation for free to take advantage of the industry momentum, engage in technical work groups and strategic initiatives, and more. To find out more, visit the OpenPOWER Foundation. diff --git a/content/blog/mellanox-and-the-openpower-ecosystem-to-help-generate-economic-growth.md b/content/blog/mellanox-and-the-openpower-ecosystem-to-help-generate-economic-growth.md new file mode 100644 index 0000000..8d583f0 100644 --- /dev/null +++ b/content/blog/mellanox-and-the-openpower-ecosystem-to-help-generate-economic-growth.md @@ -0,0 +1,20 @@ +--- +title: "Mellanox and the OpenPOWER EcoSystem to Help Generate Economic Growth" +date: "2015-06-04" +categories: + - "blogs" +tags: + - "featured" +--- + +![Scot Schultz](images/Scot-Schlultz.jpg) + +In the latest announcement _(**[UK Government Invests £115 Million in Big Data and Cognitive Computing Research with STFC and IBM](https://www-03.ibm.com/press/us/en/pressrelease/47056.wss)**)_, the [STFC Hartree Centre](http://www.stfc.ac.uk/2512.aspx) is setting out to enable the latest in world-class, state-of-the-art technologies for the development of advanced software solutions to solve real-world challenges in academia, industry and government, and to tackle the ever-growing issues of big data. + +The architecture will include POWER CPUs from IBM, the latest in flash-memory storage, GPUs from NVIDIA and, of course, the most advanced networking technology from Mellanox.  Enhanced with native support for CAPI technology and network-offload acceleration capabilities, the Mellanox interconnect will rapidly shuttle data around the system in the most effective and efficient manner to keep the cores focused on crunching the data, not on processing network communications. + +Since the inception of the OpenPOWER Foundation, Mellanox has been an active Platinum member with shared goals to collaborate with technology leaders and end users around the world to develop hardware and software solutions that are far superior in tackling the ever-changing complexities of today’s problem-sets. + +The Hartree Centre, a well-established source of innovation with leading computational scientists, data scientists and software developers, will now have the leading-edge capabilities to help them produce better outcomes to the challenges they tackle every day. For example, the Hartree Centre is already helping businesses like Unilever and GlaxoSmithKline use high performance computing to improve the stability of home products such as fabric softeners and to pinpoint links between genes and diseases. + +We are excited about this latest collaboration and look forward to the great work that is to come from the Hartree Centre as well as the OpenPOWER Foundation.
diff --git a/content/blog/micron-technology-joins-openpower-foundation-as-a-platinum-member.md b/content/blog/micron-technology-joins-openpower-foundation-as-a-platinum-member.md new file mode 100644 index 0000000..88397fe --- /dev/null +++ b/content/blog/micron-technology-joins-openpower-foundation-as-a-platinum-member.md @@ -0,0 +1,9 @@ +--- +title: "Micron Technology Joins OpenPOWER Foundation as a Platinum Member" +date: "2014-03-27" +categories: + - "press-releases" + - "blogs" +--- + +BOISE, Idaho, March 27, 2014 (GLOBE NEWSWIRE) -- Micron Technology, Inc. (Nasdaq:MU), a leading provider of advanced memory and storage solutions for enterprise data centers and high-performance computing applications, today announced their platinum membership with the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture. diff --git a/content/blog/microsemi-joins-the-openpower-foundation-with-focus-on-data-center-security.md b/content/blog/microsemi-joins-the-openpower-foundation-with-focus-on-data-center-security.md new file mode 100644 index 0000000..666637e --- /dev/null +++ b/content/blog/microsemi-joins-the-openpower-foundation-with-focus-on-data-center-security.md @@ -0,0 +1,34 @@ +--- +title: "Microsemi Joins the OpenPOWER Foundation With Focus on Data Center Security" +date: "2015-12-11" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" + - "blogs" +--- + +### Company Expands Market Focus Using its Award-Winning FPGAs to Enable Security, Storage and Computer Acceleration + +ALISO VIEJO, Calif., Dec. 11, 2015 /[PRNewswire](http://www.prnewswire.com/)/ -- **Microsemi Corporation** (Nasdaq: MSCC), a leading provider of semiconductor solutions differentiated by power, security, reliability and performance, today announced the company has joined the OpenPOWER Foundation, an open technical membership organization based on IBM's POWER microprocessor architecture founded in 2013 to enable data centers to rethink their approach to technology. Microsemi will collaborate with IBM and other OpenPOWER Foundation members to leverage its expertise in developing highly secure field programmable gate arrays ([FPGAs](http://www.microsemi.com/products/fpga-soc/soc-fpga/smartfusion2%20and%20http:/www.microsemi.com/products/fpga-soc/fpga/igloo2-fpga)), with an emphasis on moving toward more accelerated computing technology capabilities for data centers where timing, security and networking solutions are vital. + +![Microsemi Corporation. ](http://photos.prnewswire.com/prnvar/20110909/MM66070LOGO "Microsemi Corporation. ") + +"We are pleased to join the OpenPOWER Foundation, which allows Microsemi to focus on the expansion of our data center security capabilities and further elevate our solution offerings in this growing market," said Amr Elashmawi, Microsemi vice president of corporate and vertical marketing. "Our deep [expertise in security](http://www.microsemi.com/product-directory/services/3606-security-center-of-excellence-scoe) and networking solutions allows us a significant advantage as we educate the data center community on how our innovative FPGAs and our networking solutions enable data center security, storage and computer acceleration." 
+ +As a new member of the nonprofit, Microsemi joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology as well as industry-leading open source software aimed at delivering more choice, control and flexibility to developers of next generation, hyperscale and cloud data centers. The group makes power hardware and software available to open development for the first time, and makes power intellectual property (IP) licensable to others, greatly expanding the ecosystem of innovators on the platform. + +As part of its [OpenPOWER Foundation silver membership](https://openpowerfoundation.org/membership/current-members/), Microsemi will focus on changing the use model for FPGAs in the computing market with FPGA-based security and acceleration technologies designed for next generation data centers. In addition, as Microsemi continues to roll out new networking products that operate at up to 25Gbps data rates, supporting important interoperability capabilities, Microsemi's goals align with the objectives of OpenPOWER's 25G IO Interoperability Mode Work Group. The company currently offers the industry's most secure FPGAs with differential power analysis (DPA)-certified countermeasures and layered cryptographic controls, allowing for true supply chain assurance and system authentication. By applying Microsemi's FPGA and security technology in the datacenter, operators and developers can deploy proprietary IP acceleration solutions while minimizing their risk of IP compromise. + +"The development model of the OpenPOWER Foundation is ideal for industry-leading companies like Microsemi, as it elicits collaboration and represents a new way in exploiting and innovating around processor technology," said Calista Redmond, Director of OpenPOWER Global Alliances at IBM. "The Foundation will benefit greatly from Microsemi's FPGA expertise and its ability to provide both security and performance for the future demands of data centers and cloud computing." + +The OpenPOWER Foundation includes a group of industry-leading companies working together to develop high-performance computing solutions based upon IBM's POWER architecture. Members include IBM, Google, Mellanox, Micron, NVIDIA, Samsung, Tyan, Xilinx, ZTE and dozens of esteemed higher education institutions. For more information, visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org/). + +**About Microsemi** Microsemi Corporation (Nasdaq: MSCC) offers a comprehensive portfolio of semiconductor and system solutions for communications, defense & security, aerospace and industrial markets. Products include high-performance and radiation-hardened analog mixed-signal integrated circuits, FPGAs, SoCs and ASICs; power management products; timing and synchronization devices and precise time solutions, setting the world's standard for time; voice processing devices; RF solutions; discrete components; security technologies and scalable anti-tamper products; Ethernet solutions; Power-over-Ethernet ICs and midspans; as well as custom design capabilities and services. Microsemi is headquartered in Aliso Viejo, Calif., and has approximately 3,600 employees globally. Learn more at [www.microsemi.com](http://www.microsemi.com/). + +Microsemi and the Microsemi logo are registered trademarks or service marks of Microsemi Corporation and/or its affiliates. Third-party trademarks and service marks mentioned herein are the property of their respective owners. 
+ +"Safe Harbor" Statement under the Private Securities Litigation Reform Act of 1995: Any statements set forth in this news release that are not entirely historical and factual in nature, including without limitation statements related to it joining the OpenPOWER Foundation, and its potential effects on future business, are forward-looking statements. These forward-looking statements are based on our current expectations and are inherently subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements. The potential risks and uncertainties include, but are not limited to, such factors as rapidly changing technology and product obsolescence, potential cost increases, variations in customer order preferences, weakness or competitive pricing environment of the marketplace, uncertain demand for and acceptance of the company's products, adverse circumstances in any of our end markets, results of in-process or planned development or marketing and promotional campaigns, difficulties foreseeing future demand, potential non-realization of expected orders or non-realization of backlog, product returns, product liability, and other potential unexpected business and economic conditions or adverse changes in current or expected industry conditions, difficulties and costs of protecting patents and other proprietary rights, inventory obsolescence and difficulties regarding customer qualification of products. In addition to these factors and any other factors mentioned elsewhere in this news release, the reader should refer as well to the factors, uncertainties or risks identified in the company's most recent Form 10-K and all subsequent Form 10-Q reports filed by Microsemi with the SEC. Additional risk factors may be identified from time to time in Microsemi's future filings. The forward-looking statements included in this release speak only as of the date hereof, and Microsemi does not undertake any obligation to update these forward-looking statements to reflect subsequent events or circumstances. + +Logo - [http://photos.prnewswire.com/prnh/20110909/MM66070LOGO](http://photos.prnewswire.com/prnh/20110909/MM66070LOGO) diff --git a/content/blog/minicloud-free-openpower-cloud.md b/content/blog/minicloud-free-openpower-cloud.md new file mode 100644 index 0000000..23bd0d5 --- /dev/null +++ b/content/blog/minicloud-free-openpower-cloud.md @@ -0,0 +1,39 @@ +--- +title: "Minicloud: The FREE OpenPower Cloud by University of Campinas" +date: "2018-01-30" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "power" + - "openpower-foundation" + - "minicloud" + - "cloud" + - "unicamp" + - "high-performance-computing" +--- + +Minicloud aims to bring POWER architecture to everyone free of charge. + +_(Yes, you read that right: FREE of charge.)_ + +[![MiniCloud - the free OpenPOWER Cloud by Unicamp](images/MiniCloud.png)](https://openpowerfoundation.org/wp-content/uploads/2018/01/MiniCloud.png) + +## Minicloud Services + +Minicloud provides two basic services, virtual machines and job scheduling. + +Voice machines are aimed at general purpose usage, allowing users to have full control over the resources allocated for them and over the machines created. The job scheduling service allows users to submit a task requiring a large amount of processing to be run during an exclusive time slot on a bare metal machine, a POWER System S822LC for High Performance Computing. 
+ +The infrastructure is hosted at the University of Campinas (Unicamp), the result of a partnership with IBM that has lasted over a decade. This partnership also led Unicamp to become the first academic member of the OpenPOWER Foundation in Latin America. + +Beginning in 2014, Minicloud offered users a manually created virtual machine running on PowerKVM and supported by a set of IBM POWER8 scale-out servers. As the service grew, it became necessary to implement a more autonomous solution. We currently use OpenStack, allowing users to launch and destroy virtual machines on their own once their request is approved. With this solution, much of the work is done automatically, such as load balancing among the servers, limiting per-user usage, controlling access to the virtual machines, and more. + +In 2016, the Minicloud project won "Best in Show" at the "Innov8 with POWER8 University Challenge." To this day, over 2,000 instances have been launched on Minicloud, catering to a range of needs, from curious exploration of the POWER processor on Linux to supporting open source communities in their development. + +Minicloud's instances can be created using a variety of flavors (amount of vCPUs, RAM, and disk) and Linux distros. Minicloud currently supports Debian, Ubuntu, Fedora and CentOS, all on ppc64le and some also on ppc64. The machines are ready to be used a few seconds after launching, via SSH. Most users at Minicloud work on academic projects or open source software. We have proudly provided infrastructure for projects like Zabbix, HHVM, RocksDB, GDB, Debian, Glibc, and several others. + +In the future, we plan to add a validation service. This will consist of regularly building and testing open source projects using our infrastructure to detect compatibility issues or inconsistent behavior when running on POWER. + +Those who are interested in using Minicloud can request access at [http://openpower.ic.unicamp.br/minicloud/](http://openpower.ic.unicamp.br/minicloud/). Every request is individually reviewed and may take a few days to process. We are looking forward to helping you expand your POWER. diff --git a/content/blog/national-university-of-singapore-develops-hybrid-cooling-for-sustainable-efficient-data-centres.md b/content/blog/national-university-of-singapore-develops-hybrid-cooling-for-sustainable-efficient-data-centres.md new file mode 100644 index 0000000..219025d --- /dev/null +++ b/content/blog/national-university-of-singapore-develops-hybrid-cooling-for-sustainable-efficient-data-centres.md @@ -0,0 +1,52 @@ +--- +title: "National University of Singapore Develops Hybrid Cooling for Sustainable, Efficient Data Centres" +date: "2019-02-13" +categories: + - "blogs" +tags: + - "featured" +--- + +By [Lee Poh Seng](http://blog.nus.edu.sg/mtsgroup/people/), Associate Professor, Department of Mechanical Engineering, National University of Singapore + +The [National University of Singapore](http://nus.edu.sg/) and our [Micro Thermal Systems Group](http://blog.nus.edu.sg/mtsgroup/) joined the OpenPOWER Foundation as an academic member to contribute our R&D expertise on electronics cooling and thermal solutions. With an experienced team comprising researchers, engineers and entrepreneurs, as well as nearly two decades of research practice in this area, we are confident that the OpenPOWER Foundation will help us in the development of our solutions. 
+ +### **Sustainable Green Data Centre** + +The current data centre landscape in Singapore is set to grow exponentially, especially after the opening of the China Mobile International Data Centre and Facebook's announcement of its intention to set up a billion-dollar data centre locally. Furthermore, the recent declaration of a joint venture between Keppel Group and Salim Group to set up a data centre in Indonesia signals growing confidence in data centre expertise in the ASEAN region. + +### **Novel Oblique-fin Heat Sink** + +Our novel oblique fins utilise secondary flow to regenerate and disrupt the thermal boundary layer, which improves fluid mixing and the heat transfer rate. Based on comparative studies with the EK Supremacy EVO water block, our oblique fins sustained a temperature reduction of about 2.5°C. In other words, the oblique-fin design uses one-third of the pumping power to achieve the same bottom chip temperature. We have successfully cooled chip-scale heat fluxes of 350-400 W/cm² using the oblique-fin heat sinks. + +![](images/Singapore-Blog.png) + +### **Hybrid Cooling Solution** + +The oblique-fin heat sinks are employed in our hybrid cooling solution. This cooling system involves decoupling the cooling load in a server based on the heat flux of its components. Active electronic components will be cooled using high-performance liquid/two-phase cold plates while other auxiliary components will be thermally managed by rack-level air cooling, thus supporting higher ambient temperature operations and significantly reducing energy consumption. + +![](images/Singapore-Blog-2.png) + +### **Triple-fluid Heat Exchanger** + +The novel triple-fluid heat exchanger enables highly energy-efficient hybrid cooling. Heat transfer occurs between the supply water from the dry cooler and the server coolant, as well as between the supply water and the rack air. The triple-fluid heat exchanger can be fitted to the rear of the rack as a rear-door heat exchanger. As such, it enables high ambient temperature operations and totally eliminates the need for chilling, providing truly free cooling. + +![](images/Singapore-Blog-3.png) + +### **Ultra-high Density Data Centre** + +The ultra-high density data centre concept is achieved by coupling the hybrid cooling system and the triple-fluid heat exchanger for high ambient temperature operations. Since this hybrid cooling system does not require a raised floor or an overhead plenum, this space can be utilised to place taller racks (e.g. 60U), depending on the floor loading and height limit. Compared to conventional racks (e.g. 42U and 47U), which have power densities of up to 80 kW and 90 kW per rack respectively, the ultra-tall server rack allows the server density to go much higher (e.g. 115 kW per rack). This implementation is more suitable for brownfield data centres. + +For greenfield data centres, it will be possible to have a shorter floor height with the removal of the overhead plenum for supply air and the raised floor for return air. As a result, multi-storey data centres can now be significantly denser, i.e. more real estate is made available for accommodating additional server racks instead of being wasted on piping and ducting networks. + +![](images/Singapore-Blog-4-1024x511.png) + +### **Schematic of Cooling System** + +The integration of the above components results in a data centre with better energy efficiency, as cooling is tailored to specific components in the server. The system operates at high ambient temperatures, so the need for chilling is eliminated. 
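One way to quantify this benefit is power usage effectiveness (PUE), the ratio of total facility power to IT equipment power, so shedding chiller load moves a data centre directly toward the ideal value of 1. A worked illustration with purely assumed numbers (not measurements from this system):

$$\mathrm{PUE}=\frac{P_{\text{total facility}}}{P_{\text{IT}}},\qquad \frac{1.5\ \text{MW}}{1.2\ \text{MW}}=1.25\ \longrightarrow\ \frac{1.3\ \text{MW}}{1.2\ \text{MW}}\approx 1.08\ \text{after removing }0.2\ \text{MW of chiller load.}$$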
Removal of the chiller and CRAC units leads to huge energy savings, which results in an ultra-low PUE. + +![](images/Singapore-Blog-5-1024x467.png) + +Learn more about efficient hybrid cooling systems for high ambient temperature data centres in the video below. + +\[video width="640" height="352" mp4="http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2019/02/Blog-Video\_Singapore.mp4"\]\[/video\] diff --git a/content/blog/nci-australia-openpower-member.md b/content/blog/nci-australia-openpower-member.md new file mode 100644 index 0000000..72aea3d --- /dev/null +++ b/content/blog/nci-australia-openpower-member.md @@ -0,0 +1,42 @@ +--- +title: "Meet NCI Australia, the first OpenPOWER member based down under" +date: "2017-10-26" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "nci" + - "nci-australia" + - "australian-national-university" + - "high-powered-computing" + - "artificial-intelligence" +--- + +[NCI Australia](http://nci.org.au/) is Australia’s most highly integrated high-performance research computing environment, providing world-class services to government, industry, and researchers. + +[![NCI Australia](images/NCI-building-1024x683.jpg)](https://openpowerfoundation.org/wp-content/uploads/2017/10/NCI-building.jpg) + +NCI is based out of the Australian National University and is home to the Southern Hemisphere’s fastest supercomputer. Additionally, NCI boasts Australia’s highest-performance research cloud, its fastest file systems, and its largest research data repository. This is all supported by an expert team of NCI staff who are recognized both nationally and internationally. This team is a great addition to OpenPOWER, as NCI and OpenPOWER share the same goals in technological research. + +_“To be the first ever Australian organization to join the OpenPOWER Foundation provides recognition of NCI’s standing, and represents a step toward a more heterogeneous architecture,”_ said Allan Williams, Associate Director (Services and Technologies), NCI. + +## **Power Architecture for High-Powered Computing** + +NCI staff have been working with Australia’s scientific community to qualify a range of memory-intensive applications for use under IBM’s architecture. With this architecture and OpenPOWER membership, they are able to research further innovations around computing architecture. + +Additionally, NCI is the first OpenPOWER member that has been able to merge both POWER8 and x86 architectures into the same scheduling system. The combination of POWER8 and x86 allows for unparalleled diversity and accessibility for researchers accessing high-powered computing (HPC) through NCI. + +[![NCI Australia](images/Raijin_dark3.jpg)](https://openpowerfoundation.org/wp-content/uploads/2017/10/Raijin_dark3.jpg) + +## **The Intersection of AI and HPC** + +NCI has been working to acquire various IBM Power System nodes to add to their existing HPC infrastructure. This includes the Raijin system, a hybrid supercomputer featuring Fujitsu Primergy and Lenovo NeXtScale high-performance, distributed-memory clusters. + +NCI researchers are currently utilizing these nodes due to their memory bandwidth. The bandwidth is so significant that, by itself, it provides a large performance advantage for some of their applications. 
As familiarity with the technology increases, IBM Power Systems nodes will present an opportunity for researchers, like those using NCI’s Raijin, to explore the intersection of AI and HPC across a wide range of scientific applications. + +The intersection of AI and HPC is increasingly important to supercomputing, and the IBM Power Systems design presents clients with a single package of HPC and AI capabilities. + +NCI will be able to introduce Australia’s first fully heterogeneous open architecture solution to support the needs of Australian researchers. The open architecture solution will bring IBM Power Systems for HPC technology into its data center, providing increased flexibility, optimization, and efficiency. + +To learn more about NCI Australia, visit their website [here](http://nci.org.au/) or follow them on Twitter [here](https://twitter.com/NCInews). diff --git a/content/blog/nec-acceleration-for-power.md b/content/blog/nec-acceleration-for-power.md new file mode 100644 index 0000000..474fde0 --- /dev/null +++ b/content/blog/nec-acceleration-for-power.md @@ -0,0 +1,38 @@ +--- +title: "NEC’s Service Acceleration Platform for Power Systems Accelerates and Scales Cloud Data Centers" +date: "2015-11-16" +categories: + - "blogs" +--- + +By Shinji Abe, Senior Manager for the IT Platform Division of NEC + +As usage of the cloud expands, cloud data centers will need to be able to accommodate a wide range of services, from office applications to on-premises services and, in the future, the Internet of Things (IoT). To meet these needs, the modern data center requires the ability to simultaneously handle multiple demands for data storage, networks, numerical analysis, and image processing from various users. + +NEC’s new Service Acceleration Platform addresses this need by working at the device level to assign resources to perform computation and scale up individual performance and functionality. Unifying standard hardware and software components, the Service Acceleration Platform delivers faster, more powerful, and more reliable computing solutions that meet customer performance demands. + +## What is ExpEther? + +The architecture of the Service Acceleration Platform is based on NEC-developed ExpEther technology ([http://www.expether.org/index.html](http://www.expether.org/index.html)). The ExpEther technology can extend PCI Express beyond the confines of a computer chassis via Ethernet without any modification of existing hardware or software. Computer resources can be added over a standard Ethernet fabric just as if they were added inside the chassis, providing scale-up flexibility. ExpEther can build a new type of computing environment without physical constraints, and it is cost effective through the use of standard Ethernet. + +[![image 1](images/image-1-1024x386.png)](https://openpowerfoundation.org/wp-content/uploads/2015/11/image-1.png)The CPU views the ExpEther engine as a PCI Express switch rather than as Ethernet. This means that ExpEther is an implementation of a PCI Express switch, so it is fully compatible with the PCI Express specification. + +## Service Acceleration Platform + +In IoT data processing, various data with various characteristics are generated from the physical inputs. To accelerate the processing of these inputs, various accelerator engines are necessary depending on the workload. + +[![image 2](images/image-2-1024x481.png)](https://openpowerfoundation.org/wp-content/uploads/2015/11/image-2.png)In NEC’s Service Acceleration Platform, all IO devices are disaggregated by ExpEther. 
The platform can flexibly configure versatile systems with the needed number of GPGPUs, accelerator FPGAs and NVMe SSDs according to the workload. + +## CAPI Capable ExpEther Engine + +NEC extended the ExpEther functionality for CAPI compliance and confirmed that ExpEther technology can extend CAPI-attached devices remotely from the CPU via an Ethernet switch. + +## [![image 3](images/image-3-1024x271.png)](https://openpowerfoundation.org/wp-content/uploads/2015/11/image-3.png)Product Lineup + +NEC is currently shipping 1G and 10G versions of ExpEther products and developing a high-performance version for demanding environments and workloads. + +[![image 4](images/image-4-1024x189.png)](https://openpowerfoundation.org/wp-content/uploads/2015/11/image-4.png) + +* * * + +_Shinji Abe is a Senior Manager for the IT Platform Division of NEC Corporation in Tokyo, Japan. He is in charge of development of the Service Acceleration Platform with ExpEther technology._ diff --git a/content/blog/new-ai-demos-allow-you-to-test-how-gpgpu-technologies-interact-across-different-platforms.md b/content/blog/new-ai-demos-allow-you-to-test-how-gpgpu-technologies-interact-across-different-platforms.md new file mode 100644 index 0000000..a274c37 --- /dev/null +++ b/content/blog/new-ai-demos-allow-you-to-test-how-gpgpu-technologies-interact-across-different-platforms.md @@ -0,0 +1,28 @@ +--- +title: "New AI demos allow you to test how GPGPU technologies interact across different platforms" +date: "2019-12-12" +categories: + - "blogs" +tags: + - "openpower" + - "nvidia" + - "gpu" + - "nvlink" + - "artificial-intelligence" + - "ai" + - "center-for-genome-research-and-biocomputing" + - "oregon-state-university" + - "pci" +--- + +_This article was originally published by [IBM on its Power Developer Portal](https://developer.ibm.com/linuxonpower/2019/12/06/new-ai-demos-allow-you-to-test-how-gpgpu-technologies-interact-across-different-platforms/)._ _Oregon State University is a member of the OpenPOWER Foundation._ + +The [Center for Genome Research and Biocomputing](https://cgrb.oregonstate.edu/) (CGRB) at Oregon State University works closely with hardware vendors to test different configurations. Many of these configurations push the limits of processing hardware because they are used for cutting-edge research across a gamut of disciplines. Through the process of working with NVIDIA general-purpose computing on graphics processing unit (GPGPU) technologies, we came to recognize differences between architectures such as SXM2, NVLink and PCI interconnections, and PPC64LE and x86. Changing these architectural interactions helps remove data congestion on the bus when interacting with GPGPU technologies. For example, SXM2 with NVLink on the PPC64LE system can use system memory with GPGPU hardware, whereas SXM2 with NVLink on x86 provides GPGPU-to-GPGPU throughput but no access to system memory. + +Through extensive testing of the same GPGPU in systems with different hardware architectures, we found a set of pathways that can reduce processing time and change the scope of work. Working with Tech Data and other hardware vendors, we are able to provide a set of AI demos for users who want to test how GPGPU technologies interact across different systems. The demo runs real algorithms used for research on real systems in real time. 
For example, users will be able to watch a recently published tool used to identify owls in the forest from sound ([https://doi.org/10.1002/rse2.125](https://doi.org/10.1002/rse2.125)) run classification of over 100,000 images on two different architectures at the same time. This research generally produces hundreds of terabytes of data per season to process, so finding the best architecture for pushing data through the GPGPU was important. During each demo, users are provided with information about how the systems are performing, such as GPU load, GPU memory usage and GPU throughput. There are several different demos, each showing different types of interactions with GPGPU technologies. Because this is a resource that runs in real time on real systems, we ask users to sign up for a time slot to access the demo. + +Go to the demo portal page to get started: + +[http://aidemo.cgrb.oregonstate.edu/](http://aidemo.cgrb.oregonstate.edu/) diff --git a/content/blog/new-executive-director-selected-to-lead-openpower-foundation.md b/content/blog/new-executive-director-selected-to-lead-openpower-foundation.md new file mode 100644 index 0000000..62aa4eb --- /dev/null +++ b/content/blog/new-executive-director-selected-to-lead-openpower-foundation.md @@ -0,0 +1,36 @@ +--- +title: "New Executive Director Selected to Lead OpenPOWER Foundation" +date: "2020-04-30" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "hugh-blemings" + - "james-kulina" + - "hyper-sh" +--- + +\[caption id="attachment\_5638" align="alignleft" width="150"\]![](images/Hugh-1-150x150.jpg) "It’s been an honour and joy to have been part of the OpenPOWER ecosystem as Executive Director of the Foundation." - Hugh Blemings\[/caption\] + +I had the good fortune to join the OpenPOWER Foundation as Executive Director in late 2017. Over the ensuing 2.5 years I’ve worked with some fantastic people both within our Membership and of course the broader OpenPOWER Ecosystem. Some of these folk are new to me; others I’ve known through the broader open technical commons for over 20 years. + +I’ve also been privileged to be a part of the team that has brought about some of the most significant steps in the OpenPOWER journey, not least of which was the [opening of the POWER ISA](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) in August 2019. + +Over the last few months I have found it more difficult to do justice to the ED role. The root causes are personal in nature and thankfully transient and now fully behind me. These challenges provided an opportunity to reflect, and I determined that a change is in order. This, coupled with the appearance of a fantastic potential successor, made the correct decision clear. + +Accordingly, I will step down as Executive Director of the OpenPOWER Foundation on 1 June 2020. I indicated to the Board my desire to ensure as seamless a transition as possible and am delighted to be able to stay on as an Advisor to the Board after the transition concludes. + +While I have not identified a specific next step personally, I can say this: I will be staying in the Open Source space, I will _not_ be undertaking a role that conflicts with the OpenPOWER mission, and I intend to return to people/engineering team management, which is where my heart lies. + +Anyways, enough about me! :) + +I am delighted to advise that my successor is New York-based [James Kulina](https://www.linkedin.com/in/james-kulina/). 
James was most recently Chief Operating Officer at Hyper.sh and comes from a background in open source enterprise software and hardware and has been interested in OpenPOWER for many years. + +He is uniquely well placed, having extensive experience in both startup and enterprise settings, to guide OpenPOWER and the OpenPOWER Foundation through the many exciting opportunities before us and also happens to be a super nice chap! James will be reaching out shortly in a follow on blog post to introduce himself further. + +It’s been an honour and joy to have been part of the OpenPOWER ecosystem as ED of the Foundation. We have a bright future to look forward to, and I look forward to continuing to be involved for many years to come as a member of the community. + +In closing I wish all of you personally and professionally the very best. + +\-Hugh diff --git a/content/blog/new-whitepaper-hpc-and-hpda-for-the-cognitive-journey-with-openpower.md b/content/blog/new-whitepaper-hpc-and-hpda-for-the-cognitive-journey-with-openpower.md new file mode 100644 index 0000000..bca377e --- /dev/null +++ b/content/blog/new-whitepaper-hpc-and-hpda-for-the-cognitive-journey-with-openpower.md @@ -0,0 +1,22 @@ +--- +title: "New Whitepaper: HPC and HPDA for the Cognitive Journey with OpenPOWER" +date: "2016-06-08" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Dr. Srini Chari, Managing Partner, Cabot Partners_ + +![5](images/5-1024x512.jpg)I’m pleased to announce the publication of Cabot Partners’ new Whitepaper, [HPC and HPDA for the Cognitive Journey with OpenPOWER](http://ibm.co/29wENZQ).  An update to last year’s [Crossing the Performance CHASM with OpenPOWER](https://openpowerfoundation.org/blogs/crossing-the-performance-chasm-with-openpower/), our latest analysis captures the progress and continuing momentum of the OpenPOWER Foundation and the evolution of its accelerated computing roadmap. In particular, we find the data-centric design of OpenPOWER systems to be uniquely suited to deliver on the next game-changing business opportunity – the convergence of HPC and Big Data Analytics across enterprise and research computing, coupled with the power of Deep Learning to derive the next level of value in today’s data-fueled economy. + +Whether you’re a business striving to improve customer experience and loyalty or a research institution in pursuit of increasingly valuable results, adopting infrastructure designed to learn from its own information and return insightful, actionable results – a Cognitive infrastructure – will become the key to advancement. With real-world examples across multiple industries, we highlight the need for clients to evaluate systems based on performance of complete, data-driven workflows and show how adopting high-value offerings from the OpenPOWER Foundation featuring data-centric design can help lower costs and accelerate time-to-insight on your journey to becoming a Cognitive business. + +Read the Executive Summary and download the full version [here](http://ibm.co/29wENZQ). + +* * * + +[![](images/Chari.png)](https://openpowerfoundation.org/wp-content/uploads/2016/02/mkg.jpeg) + +_Dr. Srini Chari has over 30 years of experience in information technology and emerging computing technologies. His current focus areas include high performance computing, analytics and cloud computing. 
Prior to co-founding the IT Analyst firm Cabot Partners, Srini was President and CEO of TurboWorx, Inc., a bioinformatics software company providing workflow solutions for grids and clusters._ diff --git a/content/blog/now-available-openpower-i-o-design-architecture-version-3-compliance-test-harness-and-test-suite-specification.md b/content/blog/now-available-openpower-i-o-design-architecture-version-3-compliance-test-harness-and-test-suite-specification.md new file mode 100644 index 0000000..c1b30bc --- /dev/null +++ b/content/blog/now-available-openpower-i-o-design-architecture-version-3-compliance-test-harness-and-test-suite-specification.md @@ -0,0 +1,21 @@ +--- +title: "Now Available: OpenPOWER I/O Design Architecture Version 3 Compliance Test Harness and Test Suite Specification" +date: "2020-05-11" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "power9" + - "openpower-i-o-design-architecture" +--- + +_By Sandy Woodward, OpenPOWER Foundation Compliance Work Group Chair, IBM Academy of Technology Member_ + +The Compliance Work Group recently completed the [OpenPOWER I/O Design Architecture, version 3 (IODA3) Compliance Test Harness and Test Suite (TH/TS) Specification](https://openpowerfoundation.org/?resource_lib=i-o-design-architecture-ioda3-compliance-test-harness-and-test-suite-th-ts-review-draft). The input to the compliance specification is the [OpenPOWER I/O Design Architecture, version 3 (IODA3) Specification](https://openpowerfoundation.org/?resource_lib=openpower-io-design-architecture-ioda-specification-review-draft) which describes the chip architecture for key aspects of PCI Express® (PCIe)-based host bridge (PHB) designs for POWER9™ systems. This specification defines the PHB hardware and firmware requirements for the functions shown in the following diagram. + +![](images/OpenPOWER-IO-Design-Architecture-v3-Specification.png) + +The purpose of the OpenPOWER IODA3 Compliance TH/TS Specification is to provide the test suite requirements to be able to demonstrate OpenPOWER IODA3 compliance for POWER9™ systems. It describes the required and optional tests needed in the test suite to ensure compliance and compatibility for each of the functions in the previous diagram. + +Both documents are available on the OpenPOWER Foundation Technical Resources web page. Comments and questions on the IODA3 specification can be submitted to the public mailing list at [hwarch-ioda@mailinglist.openpowerfoundation.org](mailto:hwarch-ioda@mailinglist.openpowerfoundation.org). If you have comments you would like to make on the OpenPOWER IODA3 Compliance TH/TS Specification, you can submit them to the Compliance Work Group public mailing list at [openpower-ioda-thts@mailinglist.openpowerfoundation.org](mailto:openpower-ioda-thts@mailinglist.openpowerfoundation.org). diff --git a/content/blog/nvidia-tesla-accelerated-computing-platform-for-ibm-power.md b/content/blog/nvidia-tesla-accelerated-computing-platform-for-ibm-power.md new file mode 100644 index 0000000..c14a602 --- /dev/null +++ b/content/blog/nvidia-tesla-accelerated-computing-platform-for-ibm-power.md @@ -0,0 +1,18 @@ +--- +title: "NVIDIA Tesla Accelerated Computing Platform for IBM Power" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Abstract + +Learn how applications can be accelerated on IBM Power8 systems with NVIDIA® Tesla® Accelerated Computing Platform, the leading platform for accelerating big data analytics and scientific computing. 
The platform combines the world's fastest GPU accelerators, the widely used CUDA® parallel computing model, NVLink, high-speed GPU interconnect to power supercomputers, and a comprehensive ecosystem of software developers, software vendors, and datacenter system OEMs to accelerate discovery and insight. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Ashley-John_OPFS2015_NVIDIA_031215.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/oak-ridge-leadership-computing-facility-enables-breakthrough-science.md b/content/blog/oak-ridge-leadership-computing-facility-enables-breakthrough-science.md new file mode 100644 index 0000000..809ab82 --- /dev/null +++ b/content/blog/oak-ridge-leadership-computing-facility-enables-breakthrough-science.md @@ -0,0 +1,56 @@ +--- +title: "Oak Ridge Leadership Computing Facility Enables Breakthrough Science" +date: "2020-01-02" +categories: + - "blogs" +tags: + - "deep-learning" + - "machine-learning" + - "artificial-intelligence" + - "oak-ridge-national-laboratory" + - "jack-wells" + - "oak-ridge-leadership-computing-facility" + - "neural-network" +--- + +By: [Jack Wells](https://www.olcf.ornl.gov/directory/staff-member/jack-wells/), Director of Science, Oak Ridge National Laboratory National Center for Computational Sciences + +The [Oak Ridge Leadership Computing Facility](https://www.olcf.ornl.gov/about-olcf/) was established at Oak Ridge National Laboratory over [25 years ago](https://www.youtube.com/watch?v=CDfANp9ZE9k). We set out on a mission to accelerate scientific discovery and engineering progress by providing world-leading computational performance and advanced data infrastructure. + +One key to our success in this mission has been our partnership with the OpenPOWER Foundation. Collaboration between industry leaders including IBM Power Systems, Nvidia, Mellanox and more enabled the creation of [Summit](https://www.olcf.ornl.gov/summit/), the world’s most powerful supercomputer since June 2018. + +As director of science for the Oak Ridge Leadership Computing Facility, it’s been a joy to oversee the scientific outcomes of our user program, many of which are using groundbreaking artificial intelligence and deep learning technologies, and have incredible potential to improve the world as we know it. Don’t just take my word for it; learn more about four research projects that have each been conducted on Summit below. + +## ![OLCF Systems Enable Breakthrough Science](images/OLCF-image.jpg)**Deep Learning Expands Study of Nuclear Waste Remediation** + +A team from Lawrence Berkeley National Laboratory, Pacific Northwest National Laboratory and NVIDIA has achieved exaflop performance on Summit with a deep learning application used to model subsurface flow in the study of nuclear waste remediation. This work demonstrates the promise of physics-informed generative adversarial networks (GANs) for analyzing complex, large-scale science problems. + +Results from the study were presented at SC19. [Learn more about the project here](https://cs.lbl.gov/news-media/news/2019/deep-learning-expands-study-of-nuclear-waste-remediation-2/). + +## **Artificial Intelligence Approach Points to Bright Future for Fusion Energy** + +A team of researchers led by [Bill Tang](https://plasma.princeton.edu/people/william-m-tang) of the Princeton Plasma Physics Laboratory and Princeton University tested their Fusion Recurrent Neural Network (FRNN) code on Titan and Summit. 
Using neural networks, FRNN identifies patterns in plasma behavior to quickly and accurately predict disruptions in fusion reactors. + +According to Tang, “with powerful predictive capabilities, we can move from disruption prediction to control, which is the holy grail in fusion. It’s just like in medicine - the earlier you can diagnose a problem, the better chance you have of solving it.” [Learn more about this project here](https://www.olcf.ornl.gov/2019/07/22/artificial-intelligence-approach-points-to-bright-future-for-fusion-energy/). + +## **AI for Plant Breeding in an Ever-changing Climate** + +[Dan Jacobson](https://www.ornl.gov/staff-profile/daniel-jacobson), a research and development staff member in the Biosciences Division at Oak Ridge National Laboratory, and his team are currently working on numerous projects that form an integrated roadmap for the future of AI in plant breeding and bioenergy. They recently developed a new genomic selection algorithm driven by explainable AI and expanded to a global scale the climate and environmental information that can be used in the Combinatorial Metrics, or CoMet, code. + +[You can find a Q&A with Jacobson on the project here](https://www.olcf.ornl.gov/2019/11/13/ai-for-plant-breeding-in-an-ever-changing-climate/). + +## **In the Fight Against Cancer, ORNL and Stony Brook Cancer Center Enlist an Advanced Neural Network** + +Using the MENNDL code on Summit, an ORNL team has created a multi-objective neural network that can speed up cancer pathology research by quickly and accurately analyzing biopsy slide images on a scale that microscope-equipped pathologists could never completely tackle. + +According to [Joel Saltz](https://www.cs.stonybrook.edu/people/faculty/JoelSaltz), chair of the Department of Biomedical Informatics and associate director of the Stony Brook Cancer Center, “tumors are a little like stealth aircraft - they manage to actively confuse the patient’s immune system in order to not be recognized and killed.” [Read more about the project here](https://www.olcf.ornl.gov/2019/12/16/in-the-fight-against-cancer-ornl-and-stony-brook-cancer-center-enlist-an-advanced-neural-network/). + +## **First MD Simulation Trajectories Transformed into Images Recognized by Deep Learning Technology** + +A team led by [Harel Weinstein](https://physiology.med.cornell.edu/people/harel-weinstein-d-sc/), D.Sc., took 3D visual representations of molecular dynamics data, transformed them into 2D picture-like representations, and then trained a convolutional neural network to analyze and predict the class labels of the drugs or ligands that bind to two specific serotonin and dopamine receptors in humans with near-perfect accuracy. + +The study builds a framework for the efficient computational analysis of MD big data collected for the purpose of understanding ligand-specific GPCR activity. [Read more on the study here.](https://www.olcf.ornl.gov/2020/02/21/machine-learning-for-better-drug-design/) + +**Which of these five projects do you believe contains the most significant potential to impact the world? 
I would love to hear your perspective in the comments section below!** + +_\*\*Note: Post updated to include final project, which was completed in February 2020._ diff --git a/content/blog/occ-firmware-code-is-now-open-source.md b/content/blog/occ-firmware-code-is-now-open-source.md new file mode 100644 index 0000000..981f654 --- /dev/null +++ b/content/blog/occ-firmware-code-is-now-open-source.md @@ -0,0 +1,31 @@ +--- +title: "OCC Firmware Code is Now Open Source" +date: "2014-12-19" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +_by Todd Rosedahl, Chief Energy Management Engineer on POWER_ + +Today, IBM has released another key piece of infrastructure to the OpenPOWER community. The firmware that runs on the On Chip Controller (OCC), along with the host code that loads and initializes it, has been open sourced. The OCC provides access to detailed chip temperature, power, and utilization data, as well as complete control of processor frequency, voltage, and memory bandwidth. This enables customization for performance and energy management, or for maintaining system reliability and availability. Partners now have the flexibility to create innovative power, thermal, and performance solutions on POWER systems. + +[![2014_12_15_OCC_Chart](images/2014_12_15_OCC_Chart.jpg)](https://openpowerfoundation.org/wp-content/uploads/2014/12/2014_12_15_OCC_Chart.pdf) + +The OCC is a separate 405 processor that is embedded directly on the chip along with the main POWER processor cores. It has its own dedicated 512K SRAM, access to main memory, and 2 dedicated General Purpose off-load Engines (called GPEs). The main firmware runs a 250usec loop that utilizes the GPEs to continuously collect system power data by domain, processor temperatures, memory temperatures, and processor utilization data. The firmware communicates with the open source OpenPOWER Abstraction Layer (OPAL) stack via main memory. In conjunction with the operating system, it uses the data collected to determine the proper processor frequency and memory bandwidth to enable the following functions: + +**Performance Boost** The POWER processors can be set to frequencies above nominal. The OCC monitors the system and controls the processor frequency and memory bandwidth to keep the system thermally safe and within acceptable power limits. + +**Power Capping** A system power limit can be set. The OCC will continually monitor the power consumption and will reduce the allowed processor frequency to maintain that power limit. + +**Energy Saving** When the system utilization is low, the OCC infrastructure can be used to put the system into a low power state. This function can be used to comply with various government idle power regulations and standards. + +**System Availability** The OCC supports a Quick Power Drop signal that can be used to respond to power supply failures or other system events that require a rapid power reduction. This function enables systems to run through component or data center power and thermal failures without crashing. + +**System Reliability** The OCC can be used to keep component temperatures within reliability limits, extending device lifetime and limiting service costs. + +**Performance per Watt tuning** As the system utilization varies, the OCC can provide monitoring information and frequency control that maximizes system performance per watt metrics. + +These basic functions can be implemented, enhanced, and expanded. 
Additionally, completely new functions can be developed using the OCC open source firmware and accompanying framework. See code at https://github.com/open-power/occ and documentation at [https://github.com/open-power/docs/tree/master/occ](https://github.com/open-power/docs/tree/master/occ) on GitHub for more information. For additional details, please reference the video at [https://www.youtube.com/watch?v=Z-4Q0\_l9nt8&feature=youtu.be](https://www.youtube.com/watch?v=Z-4Q0_l9nt8&feature=youtu.be). diff --git a/content/blog/ohio-state-enhanced-power-support.md b/content/blog/ohio-state-enhanced-power-support.md new file mode 100644 index 0000000..2b38fbd --- /dev/null +++ b/content/blog/ohio-state-enhanced-power-support.md @@ -0,0 +1,49 @@ +--- +title: "The Ohio State University Announces Enhanced Support of Power Systems for High-Performance Computing" +date: "2018-02-22" +categories: + - "blogs" +tags: + - "openpower" + - "nvidia" + - "infiniband" + - "nvlink" + - "openpower-foundation" + - "ohio-state-university" + - "department-of-computer-science-and-engineering" + - "high-performance-mpi-and-deep-learning" + - "rsma-hadoop-library" + - "mvapich2" +--- + +By [Dhabaleswar](http://web.cse.ohio-state.edu/~panda.2/) K (DK) Panda, Professor and University Distinguished Scholar of Computer Science and Engineering, The Ohio State University + +The [Department of Computer Science and Engineering](https://cse.osu.edu/) at The Ohio State University has made major contributions to the field of high-performance computing for many years. Recently, our [Network Based Computing Lab](http://nowlab.cse.ohio-state.edu/) introduced two enhancements to further support the growth of computing on Power Systems. + +## **High-Performance MPI and Deep Learning on OpenPOWER** + +Our MVAPICH team now provides optimized support for OpenPOWER platforms with NVIDIA GPUs and NVLink to extract high-performance and scalability for MPI and Deep Learning applications. The latest MVAPICH2-GDR 2.3a release supports efficient CUDA IPC by exploiting multiple CUDA streams for multi-GPU systems with and without NVLink. + +Highlights of this release include: + +- Excellent MPI-level point-to-point communication for Device-to-Device (D-D), Device-to-Host (D-H) and Host-to-Host (H-H) paths, in addition to the CUDA-aware MPI design in the MVAPICH2-GDR library +- Unidirectional bandwidth up to 35,390 Mbytes/sec for intra-node D-D communication +- Bidirectional bandwidth up to 23,400 Mbytes/sec for inter-node D-D communication +- High-performance and scalable collective communication support for broadcast, reduce and all-reduce, the common collective operations in Deep Learning. + +These features provide novel ways to extract the highest performance and scalability on the emerging CORAL systems with OpenPOWER, NVIDIA GPUs and InfiniBand. + +More than 2,800 organizations in 85 countries already use the MVAPICH2 library, including Sunway TaihuLight, the #1 SuperComputer in the world. For more information on the MVAPICH2-GDR 2.3 library and its performance figures, please [visit our website](http://mvapich.cse.ohio-state.edu/). + +## **RDMA-Hadoop Library Empowering OpenPOWER** + +Our HiBD (High-Performance Big Data) team now provides optimized designs and support in the RDMA-Hadoop library for OpenPOWER platforms with the InfiniBand network. New designs and optimized techniques are included in the latest RDMA-Hadoop 2.x 1.3.0 library to exploit the OpenPOWER architecture. 
+ +Highlights of this release include: + +- The proposed designs can achieve up to 2.26X performance improvement for Hadoop workloads, compared to the default designs running on OpenPOWER platforms. +- These features provide novel ways to extract the highest performance and scalability for big data workloads on the emerging OpenPOWER platforms with InfiniBand interconnect, such as upcoming CORAL systems. + +For more information on the RDMA-Hadoop 2.x 1.3.0 library and its performance figures, please [visit our website](http://hibd.cse.ohio-state.edu/). + +We are excited for the opportunities provided by both these recent releases and look forward to future improvements to high-performance computing leveraging Power Systems. diff --git a/content/blog/on-chip-controller-occ.md b/content/blog/on-chip-controller-occ.md new file mode 100644 index 0000000..5dd43c1 --- /dev/null +++ b/content/blog/on-chip-controller-occ.md @@ -0,0 +1,26 @@ +--- +title: "On Chip Controller (OCC)" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Objective + +Demonstrate POWER processor and memory capabilities that can be exploited using open source OCC firmware. + +### Abstract + +The On Chip Controller (OCC) is a co-processor that is embedded directly on the main processor die. The OCC can be used to control the processor frequency, power consumption, and temperature in order to maximize performance and minimize energy usage. This presentation will include an overview of the power, thermal, and performance data that the OCC can access as well as the various control knobs, including adjusting the processor frequency and memory bandwidth. Details about the OCC processor, firmware structure, loop timings, off-load engines, and bus accesses will be given along with descriptions of example algorithms, system settings, and potential future enhancements. + +### Speaker + +Todd Rosedahl, IBM Chief Power/Thermal/Energy Management Engineer on POWER. Todd has worked on power, thermal, and energy management for his entire 22-year career at IBM and has over 20 related patents. He led the firmware effort for the On Chip Controller (OCC), which was recently released as open source. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/RosedahlTodd_OPFS2015_IBM_-031615.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/one-click-hadoop-cluster-deployment-on-openpower-systems-running-kvm-and-managed-by-openstack.md b/content/blog/one-click-hadoop-cluster-deployment-on-openpower-systems-running-kvm-and-managed-by-openstack.md new file mode 100644 index 0000000..202e180 --- /dev/null +++ b/content/blog/one-click-hadoop-cluster-deployment-on-openpower-systems-running-kvm-and-managed-by-openstack.md @@ -0,0 +1,22 @@ +--- +title: "One-click Hadoop cluster deployment on OpenPower systems running KVM and managed by Openstack" +date: "2015-01-16" +categories: + - "blogs" +--- + +Hadoop workloads are memory and compute intensive, and Power servers are the best choice for them. Power servers are built on the first processor designed to accelerate big data workloads. + +We implemented a PowerKVM-based Hadoop cluster solution on Power Systems and validated the performance of a teradata workload on PowerKVM virtual machines to ensure the consolidation of Hadoop workloads on PowerKVM. This paper covers how the capabilities of OpenPOWER and OpenStack simplify the deployment of a Hadoop solution on Power virtual machines. 
We would also like to share the VM and Hadoop cluster configuration that yields better performance. + +This presentation talks about "One-click hadoop cluster deployment on OpenPower systems running KVM and managed by Openstack" + +Pradeep K Surisetty 普拉迪普库马 Linux KVM (PowerKVM & zKVM) Test Lead, Linux Technology Centre, Bangalore + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Surisetty-Pradeep_OPFS2015_IBM_031115_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/open-cognitive-environment-open-ce-a-valuable-tool-for-ai-researchers.md b/content/blog/open-cognitive-environment-open-ce-a-valuable-tool-for-ai-researchers.md new file mode 100644 index 0000000..ffc9ced --- /dev/null +++ b/content/blog/open-cognitive-environment-open-ce-a-valuable-tool-for-ai-researchers.md @@ -0,0 +1,37 @@ +--- +title: "Open Cognitive Environment (Open-CE) - A valuable tool for AI researchers" +date: "2021-04-13" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "openpower-foundation" + - "artificial-intelligence" + - "ai" + - "oregon-state-university" + - "open-ce" + - "open-cognitive-environment" +--- + +_[Christopher Sullivan](https://www.linkedin.com/in/christopher-m-sullivan-446904/), Assistant Director for Biocomputing, Oregon State University - Center for Genome Research and Biocomputing_ + +The world has been changed by the use of artificial intelligence (AI) to quickly understand data as it relates to our environments. These AI workloads are enabled by machine learning (ML) and deep learning (DL) frameworks, which give insight into how computers can learn from information to find solutions and answer questions. + +Many times, these ML/DL frameworks, such as TensorFlow, can be very difficult for individual users and system administrators to install with GPU capability. Because technologies such as general-purpose GPU (GPGPU) computing require these frameworks to be compiled correctly to enable the hardware, not having easy access to pre-compiled versions limits the use of these technologies. Computational researchers are constantly fighting the need to use new tools and enable them versus using the tools to answer scientific questions. + +Most researchers would rather spend time answering questions than installing or updating software packages that enable the hardware they use. We found that if research groups are enabled to manage these tools with little impact on their time or effort, they are more likely to use them. + +Recently, Open Cognitive Environment (Open-CE) was established as a new community-driven set of ML/DL tools, enabling some of the best hardware with low activation energy for users. Open-CE was built using IBM Watson Machine Learning Community Edition (WML CE) and OpenPOWER. This new version has a similar set of tools but can now be controlled and managed by the community that uses and needs the resource. That community includes developers who want full control of all the versions of the tools and users who just want compiled binaries. + +The Open-CE community is working to provide both. The main Open-CE GitHub page ([https://github.com/open-ce](https://github.com/open-ce)) focuses on providing feedstock to developers, and groups such as the Open Source Lab (OSL). 
The Center for Genome Research and Biocomputing (CGRB) at Oregon State University provides precompiled Conda packages ([https://osuosl.org/services/powerdev/opence](https://osuosl.org/services/powerdev/opence)). + +Open-CE is valuable to researchers because it provides the latest and greatest AI package and framework versions pre-integrated in an easy-to-consume and easy-to-use Conda environment. Cutting-edge research requires cutting-edge tools, as recently demonstrated by a paper on passive monitoring of animals in the forest titled, "_Workflow and convolutional neural network for automated identification of animal sounds_" ([https://doi.org/10.1016/j.ecolind.2021.107419](https://doi.org/10.1016/j.ecolind.2021.107419)). + +The Lesmeister lab at the US Forest Service, along with the Levi Lab and the CGRB at Oregon State University, found that using OpenPOWER hardware for both segmentation and classification of data increased throughput, allowing them to change the scope of the work. Initially, this work was focused on passive monitoring of the northern spotted owl population. However, with increased throughput, we were now able to start looking through the same data to monitor more species. + +Figure 1 (which is a screen capture from the paper) shows the ability to monitor different species. The important part of this progress is that the computational scientists processing the data have control over the tools (including the optimized ML/DL workflows) needed to accomplish this work. + +Open-CE has enabled this capability and reduced the management of technologies, while providing cutting-edge tools for research like this. We plan to start looking at how to bring this processing closer to the edge, such as phones and portable devices where the data is collected. Having Open-CE accessible on these edge devices can help maintain continuity and increase usage. + +![](images/Picture1.png) diff --git a/content/blog/open-compute-summit-barreleye.md b/content/blog/open-compute-summit-barreleye.md new file mode 100644 index 0000000..b7ce8ae --- /dev/null +++ b/content/blog/open-compute-summit-barreleye.md @@ -0,0 +1,34 @@ +--- +title: "OpenPOWER Members at Open Compute Summit Detail Their Barreleye Plans" +date: "2016-03-09" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Sam Ponedal, Social Strategist[![barreleye fish](images/barreleye-fish.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/10/barreleye-fish.jpg)_ + +Last year at Open Compute Summit, OpenPOWER member Rackspace stole the show when they announced their plans to develop [Barreleye, their new mega-server built with open standards across the board](https://openpowerfoundation.org/blogs/openpower-open-compute-rackspace-barreleye/). 
In the year since, OpenPOWER members have jumped on the Barreleye bandwagon, and it’s easy to see why when Barreleye was described by Rackspace’s Aaron Sullivan as having “the capacity for phenomenal virtual machine, container, and bare metal compute services.” + +At Open Compute Summit 2016, we caught up with three of our members, Mark III Systems, StackVelocity, and Penguin Computing to learn what they have planned for Barreleye. Here’s what they said: + +## [Mark III Systems](http://www.markiiisys.com/) + +_Andy Lin, Vice President, Strategy[![Mark III](images/Mark-III.png)](https://openpowerfoundation.org/wp-content/uploads/2016/03/Mark-III.png)_ + +Mark III Systems is currently working with our OpenPOWER community ecosystem partners to develop and bring to market Barreleye systems, enabled by OpenPOWER technologies, within an Open Compute Project architectural design.  It’s our current plan to work toward enabling enterprises and service providers across North America to acquire this compelling, cloud-centric platform through Mark III and for our team of engineers to provide value-added expertise and services to empower our clients to be successful at all stages of the system’s product lifecycle, when needed.  As a long-time IBM Premier Business Partner with strong architectural and execution expertise around both POWER-based systems and cloud tech stacks, we’re very excited at the unique value proposition that Barreleye presents for not only our hyperscale, analytics, and cloud-focused clients, but also the potential possibilities for all enterprises as Barreleye adoption grows in lockstep with the OpenPOWER community. + +## [Penguin Computing](http://www.penguincomputing.com/products/rackmount-servers/openpower-servers/) + +_Jussi Kukkonen, Director of Product Management[![penguin](images/penguin.png)](http://www.penguincomputing.com/products/rackmount-servers/openpower-servers/)_ + +Since its inception in 1998, Penguin Computing's focus has been providing customers choice and flexibility by enabling open source systems in the data center. This pursuit of platform choices led Penguin to join the Open Compute Project in 2013 and OpenPOWER Foundation in 2015. Our introduction of the [Penguin Magna 1015 system](http://www.penguincomputing.com/products/rackmount-servers/openpower-servers/) combines the OpenPOWER processor platform with the Open Compute Project physical form factor. This architecture is consistent with Penguin's continuing commitment and investment in Open Compute Project and emphasis on customer-driven choice. Penguin's well established Linux practice, solution delivery and support capabilities are now available to customers evaluating and deploying OpenPOWER solutions. + +## [StackVelocity](http://www.markiiisys.com/blog/2016/03/14/mark-iii-openpower-open-compute-project-premise-barreleye/) + +_By Ray Salgado, Business Unit Director_ + +[![StackV_reg](images/StackV_reg.jpg)](http://www.markiiisys.com/blog/2016/03/14/mark-iii-openpower-open-compute-project-premise-barreleye/) + +StackVelocity has the capability to integrate, configure, test and deploy quality [Barreleye systems](http://www.markiiisys.com/blog/2016/03/14/mark-iii-openpower-open-compute-project-premise-barreleye/) as part of our Cloud Services. We target large scale data center deployments with our services, and using our vast network of distribution sites, our services are available on a global basis. 
Our engineering teams can architect customized solutions that incorporate Barreleye into any configuration, from the simple to the most complex; just give us the specification and we can deliver your solution. Our manufacturing capacity is significant; in fact, we can build thousands of systems on a tight timeline and ship anywhere in the world. We can have any international certifications tested and approved as part of the statement of work. diff --git a/content/blog/open-source-innovation-ibm-techu-rome.md b/content/blog/open-source-innovation-ibm-techu-rome.md new file mode 100644 index 0000000..ce348e1 --- /dev/null +++ b/content/blog/open-source-innovation-ibm-techu-rome.md @@ -0,0 +1,58 @@ +--- +title: "OpenPOWER and Open Source Innovation Shine at IBM TechU Rome" +date: "2018-11-07" +categories: + - "blogs" +tags: + - "featured" +--- +
+By Florin Manaila, Senior IT Architect and Inventor, IBM Systems Hardware Europe
+
+[![IBM TechU Rome](images/TechU-Rome-1024x768.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/TechU-Rome.jpg)
+
+The IBM Technical University in Rome was the perfect place to gather a large audience from around the globe on topics including AIX, Cognitive, Linux on Power, Power Systems Storage and SDI.
+
+The TechU Rome 2018 event offered unique initiatives that attracted many technical attendees during the week, including a PowerAI Meetup on AI and deep learning, a PowerAI Discussion Group on OpenPOWER hardware, and a PowerAI Lounge with presentations and demos of IBM OpenPOWER systems and the chance to interact with IBM and Business Partner experts.
+
+TechU Rome 2018 was also a great opportunity to see an IBM blueprint for deep learning on edge devices, and how PowerAI models can be used successfully on small, accelerated devices with embedded GPUs such as the NVIDIA Jetson TX2.
+
+A pattern ran across all sessions: the value of open platforms and open innovation. IBM OpenPOWER and open source innovation were presented across many topics; summaries of the sessions can be found below:
+
+**OpenCAPI: Next generation of acceleration for the cognitive era**
+
+By Myron Slota
+
+Open Coherent Accelerator Processor Interface (OpenCAPI) is a new industry standard device interface. OpenCAPI enables the development of host-agnostic devices which can coherently connect to any host platform which supports the OpenCAPI standard, providing the device with the capability to coherently cache host memory to facilitate accelerator execution. This session describes where we think acceleration is going in the industry, the disruptive technologies in the acceleration space, and where IBM Power Systems will be participating in acceleration.
+
+**IBM AC922 Deep Learning System**
+
+By Florin Manaila
+
+GPU deep learning is the foundation for the fourth industrial revolution driven by AI. This session presents the IBM AC922 architecture and its design for deep learning workloads. As a fundamental building block of a distributed deep learning architecture, various sizing aspects will be presented together with the related PowerAI software implications.
+
+**Open Source and Power Systems**
+
+By Andrew Laidlaw
+
+A look at a selection of the Open Source Communities and Projects that are relevant to Power Systems, on both the hardware and software side of innovation. As well as the obvious examples of the OpenPOWER Foundation and OpenCAPI Consortium, we will look at how Open Source is influencing the software landscape, and the Power Systems strategy.
This will include looking at the Deep Learning space, both SQL and NoSQL databases, and deployment technologies like OpenStack and Kubernetes.
+
+**Linux On Power: Trends and Directions**
+
+By Steven Roberson
+
+This session will present the roadmap for the various Linux distributions on Power, including OS, OpenStack and Container strategy. We will also discuss the capabilities of the Power 9 processor and the architecture that the distributions exploit. In addition, we will cover the hardware roadmap of all Power servers built for Linux workloads.
+
+**FPGA with CAPI: an alternative to accelerate your apps**
+
+By Alexandre Castellane
+
+This lecture will help you understand what FPGA hardware acceleration provides and when it can be used to complement or replace GPUs. The SNAP framework gives software engineers a means to use this technology in a snap! The unique advantages of POWER technology, including CAPI/OpenCAPI coupled with the SNAP framework, will be presented. What memory coherency and the low latency associated with FPGAs bring will be explored through very simple examples.
+
+**Architecting the Future: Innovating with Data, AI and Cloud**
+
+By Thomas Harrer
+
+The Architecting the Future track has been a consistent IBM track on how technology can change our lives and industries such as healthcare and automotive through the adoption of AI. It shows the business value behind the technology, the architecture, client use cases, advantages, and why IBM is the best partner to innovate with.
+
+All the presentations are [available here](https://ibmtechu.com/cgi-bin/cms/aptresults.cgi?myevent=Rome2018&doit=Search&search=.). diff --git a/content/blog/open-source-summit-north-america-linux-foundation.md b/content/blog/open-source-summit-north-america-linux-foundation.md new file mode 100644 index 0000000..02bffe6 --- /dev/null +++ b/content/blog/open-source-summit-north-america-linux-foundation.md @@ -0,0 +1,15 @@ +--- +title: "Open Source Summit North America – Linux Foundation" +date: "2018-09-26" +categories: + - "events" + - "blogs" +--- +
+\[vc\_row css\_animation="" row\_type="row" use\_row\_as\_full\_screen\_section="no" type="full\_width" angled\_section="no" text\_align="left" background\_image\_as\_pattern="without\_pattern"\]\[vc\_column\]\[vc\_column\_text\]
+
+## The technical conference for professional open source.
+
+Open Source Summit North America is the leading conference for developers, architects and other technologists – as well as open source community and industry leaders – to collaborate, share information, learn about the latest technologies and gain a competitive advantage by using innovative open solutions. Over 2,000 attendees will gather for OSSNA in 2018.
+ +\[/vc\_column\_text\]\[vc\_empty\_space height="20px"\]\[button target="\_blank" hover\_type="default" text="Learn More" link="https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/"\]\[vc\_empty\_space\]\[/vc\_column\]\[/vc\_row\] diff --git a/content/blog/opening-up-in-new-ways-how-the-openpower-foundation-is-taking-open-to-new-places.md b/content/blog/opening-up-in-new-ways-how-the-openpower-foundation-is-taking-open-to-new-places.md new file mode 100644 index 0000000..cbd82cc --- /dev/null +++ b/content/blog/opening-up-in-new-ways-how-the-openpower-foundation-is-taking-open-to-new-places.md @@ -0,0 +1,20 @@ +--- +title: "Opening Up in New Ways: How the OpenPOWER Foundation is Taking Open to New Places" +date: "2014-08-25" +categories: + - "blogs" +--- + +By Jim Zemlin - August 25, 2014 + +It’s no secret that open development is the key to rapid and continuous technology innovation. Openly sharing knowledge, skills and technical building blocks is something that we in the Linux community have long been promoting and have recognized as a successful model for breeding technology breakthroughs. Much of The Linux Foundation’s and its peerss efforts to date have been centered on fostering openness at the software level, starting right at the source -- the operating system – and building up from there. Traditionally, the agenda has not included a great amount of attention on how to open up at the hardware level. Until now. + +A year ago, many of us in the Linux community took notice when IBM, NVIDIA, Mellanox, Tyan and Google announced their intentions to form the [OpenPOWER Foundation](https://openpowerfoundation.org/), a group through which the IBM POWER processor architecture would be opened up for development. Now, one year later, the group has officially formed and the notion of open hardware development that starts at the processor level has resonated with many. + +According to OpenPOWER, they now have 53 members and seven working groups focused on enabling broad industry innovation across the full hardware and software stack. Through the Foundation, member companies are free to use the POWER architecture for custom open servers and components for Linux based cloud data centers, or any processor application they choose. + +Fostering open collaboration at all levels – from the chip and on up through the entire hardware and software stacks – is what is needed to drive a new era of innovation. To this end, The Linux Foundation looks forward to partnering with the OpenPOWER Foundation in the near future on projects in which we have a shared vision. In particular, we will aim to work together in ways that can address some of today’s largest technology challenges – like better harnessing Big Data, addressing security concerns and energy efficiency – in a way that unlocks opportunity for all. + +So, with that, let me officially welcome the OpenPOWER Foundation to the community. We look forward to working together to drive open innovation in new ways and in new places. 
+ +[Source: Linux Foundation](http://www.linuxfoundation.org/news-media/blogs/browse/2014/08/opening-new-ways-how-openpower-foundation-taking-open-new-places) diff --git a/content/blog/openpower-academic-discussion-group-workshop-promotes-collaboration.md b/content/blog/openpower-academic-discussion-group-workshop-promotes-collaboration.md new file mode 100644 index 0000000..1b2579e --- /dev/null +++ b/content/blog/openpower-academic-discussion-group-workshop-promotes-collaboration.md @@ -0,0 +1,91 @@ +--- +title: "OpenPOWER Academic Discussion Group Workshop Promotes Collaboration" +date: "2021-01-20" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "openpower-foundation" + - "academic-discussion-group-workshop" + - "university-of-oregon" + - "e4-computer-engineering" +--- + +The OpenPOWER Foundation’s Academic Discussion Group hosted its 5th annual workshop on November 6, 2020. + +The annual event is intended to facilitate interaction and engagement between Academic institutions, with a focus on supporting developers from different application areas of scientific computing, data analytics and artificial intelligence. + +The 2020 workshop was hosted by Dirk Pleiter of the Jülich Supercomputing Centre, Ganesan Narayanasamy of IBM, Sameer Shende of the University of Oregon and Fabrizio Magugliani of E4 Computer Engineering. + +A summary of the content shared at the event is included below. + +**Extreme-scale Scientific Software Stack** + +- The DOE Exascale Computing Project Software Technology focus area is developing an HPC software ecosystem that will enable the efficient and performant execution of exascale applications. +- Professor Sameer Shende, University of Oregon +- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/0/contribution/2/material/slides/) + +**Open OnDemand platform for POWER systems** + +- IBM Power plus PowerAI systems are arguably the most advanced and highly architected systems for machine learning / deep learning on the market. Here, we introduce Open OnDemand as a platform to enable to new users on Power-based HPC clusters. +- Professor Robert Settlage, Virginia Tech +- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/0/contribution/3/material/slides/0.pdf) + +**LBM performance in Exascale era** + +- In the next two years, the first Exascale class supercomputers will be delivered. In this talk, starting from the results obtained using Marconi100, we try to extrapolate a reasonable performance scenario that a Lattice Boltzmann Method (LBM) based code can achieve using these high-end HPC machines. +- Giorgio Amati, CINECA +- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/0/contribution/5/material/slides/) + +**Parallel Comparison of Huge DNA Sequences in Multiple GPUs with Pruning** + +- Sequence comparison is a task performed in several Bioinformatics applications daily all over the world. This talk presents a variant of the block pruning approach that runs in multiple GPUs, in homogeneous or heterogeneous environments. +- Professor Alba Cristina Magalhaes Alves de Melo, University of Brasilia and Mr. 
Marco Figueiredo, University of Brasilia +- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/0/contribution/10/material/slides/) + +**Middleware for Message Passing Interface (MPI) and Deep Learning on OpenPOWER platforms** + +- This talk focuses on high-performance and scalable middleware for Message Passing Interface (MPI) and Deep Learning on OpenPOWER platforms with NVIDIA GPGPUs and RDMA-enabled interconnects (InfiniBand and RoCE). +- Professor DK Panda, Ohio State University +- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/1/contribution/1/material/slides/) + +**Parallelware Analyzer: Data race detection for GPUs using OpenMP** + +- This talk presents a new innovative approach to parallel programming based on two pillars: first, an open catalog of rules and recommendations that leverage parallel programming best practices; and second, the automation of quality assurance for parallel programming through new static code analysis tools specializing in parallelism that integrate seamlessly into professional software development tools. +- Manuel Arenaz, Appentra and University of A Coruña +- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/1/contribution/4/material/slides/) + +**The Marconi100 high-frequency power monitoring system** + +- The availability of high-resolution, real-time power measurements of high-performance computing systems (HPC) opens the door to new applications, especially in the fields of security, malware detection and anomaly detection. This presentation describes the initial work done on the Marconi100 system where, using the OpenBMC framework, it is possible to obtain high resolution power measurements of a server without the need for additional hardware. +- Francesco Beneventi, University of Bologna +- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/1/contribution/9/material/slides/) + +**Exploring the Power of Containerization to Improve Data - and HPC-Education** + +- Hands-on learning over big data sets is complicated by several factors including data movement, bandwidth consumption. To understand the analytics at scale, students need a uniform learning environment to fulfill their learning needs on top of multiple different infrastructures starting from their personal laptop to different types of high-performance computing cluster. This presentation demonstrates how ‘Onstitute’ addresses this issue utilizing the power of containerization on top of cutting edge HPC-technologies such as, POWER-based hardware. +- Professor Arghya Das, University of Wisconsin Platteville + +**Memory is everywhere and… is often the bottleneck** + +- The POWER9 processor brought the OpenCapi interface which provides a huge bandwidth and very low latency to systems, but also keeps the data coherency. This standard is used by hardware accelerators to off-load applications bringing dedicated optimized processing to servers, but also to manage all new technology host memory types. This presentation includes real use cases to show its amazing capabilities through huge data acquisition chain used in synchrotron. +- Alexandre Castellane, IBM France and Mr. 
Bruno Mesnet, IBM France
+- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/2/contribution/7/material/slides/)
+
+**FPGA acceleration of Spark SQL queries using Apache Arrow and OpenCAPI**
+
+- Apache Spark is one of the most widely used big data analytics frameworks due to its user-friendly API. However, the high level of abstraction in Spark introduces large overheads to access modern high-performance heterogeneous hardware accelerators such as FPGAs. This talk discusses solutions to accelerate Spark SQL queries using FPGAs and to offload these computations transparently with little user configuration.
+- Akos Hadnagy, Delft University of Technology
+- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/2/contribution/8/material/slides/)
+
+**Power 10 features**
+
+- POWER10 is IBM's next generation POWER micro-processor that includes superior attributes for enterprise, cognitive and high-performance computing. This talk will describe many of the innovations and capabilities of POWER10 that provide a strong foundation for high-performance computing workloads.
+- Brian Thompto, IBM and Mr. Bill Starke, IBM
+- [View the presentation here](https://indico-jsc.fz-juelich.de/event/156/session/2/contribution/11/material/slides/)
+
+**Disaggregated memory technologies and future opportunities in the HPC world**
+
+- Traditional server trays encapsulate memory, computational units and accelerators, creating physical constraints that limit cluster design flexibility. When it comes to memory, however, this is no longer the only option. This talk explores how disaggregated memory technologies bring future opportunities in HPC scenarios.
+- Michele Gazzetti, IBM Research Europe diff --git a/content/blog/openpower-academic-group-carries-2016-momentum-new-year.md b/content/blog/openpower-academic-group-carries-2016-momentum-new-year.md new file mode 100644 index 0000000..171805d --- /dev/null +++ b/content/blog/openpower-academic-group-carries-2016-momentum-new-year.md @@ -0,0 +1,11 @@ +--- +title: "OpenPOWER Academic Group Carries 2016 Momentum to New Year" +date: "2017-01-24" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + + diff --git a/content/blog/openpower-academic-group-carries-2016-momentum-to-new-year.md b/content/blog/openpower-academic-group-carries-2016-momentum-to-new-year.md new file mode 100644 index 0000000..6aa1efa --- /dev/null +++ b/content/blog/openpower-academic-group-carries-2016-momentum-to-new-year.md @@ -0,0 +1,31 @@ +--- +title: "OpenPOWER Academic Group Carries 2016 Momentum to New Year" +date: "2017-01-11" +categories: + - "blogs" +--- +
+_By Ganesan Narayanasamy, Leader, OpenPOWER Academic Discussion Group_
+
+Academia has always been a leader in pushing the boundaries of science and technology, with some of the most brilliant minds in the world focused on how they can improve the tools at their disposal to solve some of the world’s most pressing challenges. That’s why, as the Leader of the OpenPOWER Academic Discussion Group, I believe working with academics in universities and research centers to develop and adopt OpenPOWER technology is key to growing the ecosystem. The Academic Discussion Group enables many academics to pursue research and development activities using Power CPUs and systems, creating very strong ecosystem growth for OpenPOWER-based systems.
+
+2016 was an amazing year for us, as we helped launch new partnerships at academic institutions like [A\*CRC in Singapore](https://openpowerfoundation.org/blogs/acrc-openpower/), [IIT Bombay in India](https://openpowerfoundation.org/blogs/openpower-research-facility-iit-bombay/), and more. We also assisted them in hosting [OpenPOWER workshops](https://openpowerfoundation.org/blogs/recap-cdac-three-day-workshop-on-openpower-for-hpc-and-big-data-analytics/) where participants learned how OpenPOWER’s collaborative ecosystem is leading the way in a multitude of research areas. Armed with this knowledge, our members helped to spread the OpenPOWER gospel. Most recently, our members were at GTC India 2016 and SC16 to meet with fellow technology leaders and discuss the latest advances around OpenPOWER.
+
+After joining the OpenPOWER Foundation as an academic member in October 2016, the [Universidad Nacional de Córdoba](http://www.unc.edu.ar/) in Argentina sent professors Carlos Bederián and Nicolás Wolovick to SC16 in Salt Lake City to learn more about OpenPOWER.
+
+> “The SC16 exhibition was a showcase of OpenPOWER systems, where the [IBM S822LC for HPC](https://blogs.nvidia.com/blog/2016/09/08/ibm-servers-nvlink/) was a remarkable piece of hardware to get to know firsthand. SC16 was also the ideal environment to discuss the balanced and powerful OpenPOWER architecture with qualified technical leaders from Penguin Computing, IBM, and others,” Wolovick explained. “Knowing the people, the hardware, and learning more about the forthcoming access to IBM S822LC for HPC are just a few of the reasons for Universidad Nacional de Córdoba’s active presence in the OpenPOWER Foundation.”
+
+In Asia, representatives from OpenPOWER and Academic Discussion Group member IIT Bombay led discussions at NVIDIA’s GTC India to advance OpenPOWER. Their session, “Getting Started with GPU Computing”, was presented by IIT’s Professor Nataraj Paluri, who discussed the multiple advantages of OpenPOWER for accelerated computing through ecosystem-driven innovation.
+
+As a result of the Academic Discussion Group’s leadership, we were honored to receive [HPCWire’s Reader’s Choice Award for Best HPC Collaboration Between Academia and Industry](https://www.hpcwire.com/off-the-wire/hpcwire-reveals-winners-2016-readers-editors-choice-awards-sc16-conference-salt-lake-city/) at SC16. Such awards only reaffirm OpenPOWER’s commitment to moving towards world-class systems — both those offered by IBM and those built by our OpenPOWER partners that leverage POWER’s open architecture. SASTRA University’s Dr. V.S. Shankar Sriram joined us in receiving the award, and he expounded on the benefits of joining OpenPOWER.
+
+> “Through the OpenPOWER foundation, we are focused in the projects related to human cognition and deep learning techniques for various life science applications. We have already ported applications like GROMACS onto the Power architecture. We are excited to be part of OpenPOWER, which helps our professors and researchers work as a team with shared objectives, and motivates us to achieve ambitious goals that have relevant impact we can be proud of.”
+
+With such a successful 2016, we’re excited to carry the momentum into the new year!
We’ve already got some great events planned, like: + +- CDAC National level Deep learning workshop, March 2017, Bangalore +- ADG and OpenPOWER user group meeting, May 8th thru 11th, San Jose +- OpenPOWER Workshop, June 22nd, Germany, More info: [https://easychair.org/conferences/?conf=iwoph17](https://easychair.org/conferences/?conf=iwoph17) +- ADG and OpenPOWER user group meeting, date TBD, Denver, USA + +Want to be even more involved with the OpenPOWER Academic Discussion Group? Then join OpenPOWER as an Academic member. Your membership entitles you to the latest news, event notifications, webcasts, discussions, and more. Learn more about membership and download the Membership Kit, here: [https://openpowerfoundation.org/membership/how-to-join/](https://openpowerfoundation.org/membership/how-to-join/). diff --git a/content/blog/openpower-accelerates-open-innovation-with-new-member-products-and-free-development-cloud.md b/content/blog/openpower-accelerates-open-innovation-with-new-member-products-and-free-development-cloud.md new file mode 100644 index 0000000..2b24c7c --- /dev/null +++ b/content/blog/openpower-accelerates-open-innovation-with-new-member-products-and-free-development-cloud.md @@ -0,0 +1,51 @@ +--- +title: "OpenPOWER Accelerates Open Innovation with New Member Products and Free Development Cloud" +date: "2015-06-10" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +**Global Tech Leaders Collaborating to Propel World’s Only Open Enterprise Server** + +**OpenPOWER China Summit, Beijing, China, June 10, 2015 –** The [OpenPOWER Foundation](http://www.openpowerfoundation.org/) today announced new solutions, expanded membership and free-of-charge cloud services for developers to advance open innovation in hyperscale data centers to better meet the growing demand for alternatives to commodity servers. + +Formed in December 2013 to facilitate open development of the POWER processor architecture, the OpenPOWER Foundation has grown to more than 130 members worldwide supporting and delivering a wide range of new POWER-based products. + +“As the desire for alternatives to x86-based servers increases around the world, OpenPOWER’s members are actively teaming together to drive design innovation and develop differentiated products,” said Brad McCredie, OpenPOWER Foundation President and IBM Fellow. “Our rapidly expanding OpenPOWER ecosystem is delivering much-needed choice and price-performance advantaged solutions for the technology industry.” + +Member product announcements include: + +- **New CAPI acceleration development kit from Alpha Data** – Leveraging Coherent Accelerator Processor Interface (CAPI), a unique feature built into the POWER architecture, Alpha Data today [announced](http://www.pr.com/press-release/623394) the availability of a [CAPI acceleration development kit](http://www.alpha-data.com/dcp/capi.php) co-developed with fellow OpenPOWER member Xilinx. The kit enables system designers and programmers leveraging hardware acceleration to utilize Xilinx All Programmable FPGA devices attached to CAPI on IBM® POWER8™ systems and OpenPOWER member-branded POWER8™ systems. + +- **General availability of CP1 processor** – Suzhou PowerCore Technology Co., Ltd. 
announced general availability of CP1, the first POWER chip for the China market which was [first introduced](https://openpowerfoundation.org/press-releases/openpower-foundation-technology-leaders-unveil-hardware-solutions-to-deliver-new-server-alternatives/) at the inaugural OpenPOWER Summit in March. Customers may begin placing orders today. Zoom Netcom’s RedPOWER servers will be the first to incorporate CP1 into its design. + +- **General Availability of Tyan OpenPOWER Systems** – Tyan’s previously sold out [TYAN GN70-BP010](http://www.tyan.com/campaign/openpower/index.html) OpenPOWER customer reference systems is available for immediate shipment at a price of $2850 USD. Additionally, Tyan is now taking orders for the [TYAN TN71-BP012](http://www.tyan.com/solutions/tyan_openpower_system.html), Tyan’s OpenPOWER server designed for large-scale cloud deployments. The estimated ship date is August. + +**“SuperVessel” OpenPOWER Development Cloud** To further support open research and development on top of the POWER architecture, IBM today [announced](https://ibm.biz/BdXSaf) SuperVessel, a first-of-its-kind initiative that enables business partners, application developers and university students to conduct innovation, development and learning for the growing OpenPOWER ecosystem. + +[SuperVessel](https://ptopenlab.com/cloudlabconsole/index.html), an open access cloud service created by Beijing’s IBM Research and IBM Systems Labs, is now available to the global community of developers who want to participate in the OpenPOWER ecosystem. The cloud acts as a virtual R&D engine for the creation, testing and pilot of emerging applications including deep analytics, machine learning and the Internet of Things. + +SuperVessel is based on POWER processors, incorporates GPUs and Xilinx FPGAs to provide heterogeneous acceleration, and uses OpenStack to manage the whole cloud. + +To date, SuperVessel has attracted thousands of users in the past six months, including developers from the open source community and students from more than 30 universities in China and around the world. OpenPOWER ecosystem partners can leverage SuperVessel to speed up their application development. + +**OpenPOWER Design for Open Data Center Committee** Last week IBM announced it joined China’s [Open Data Center Committee](http://www.opendatacenter.cn/) (ODCC), a consortium of leading Chinese technology providers creating new server specifications for hyperscale data centers. IBM has begun working closely with several fellow OpenPOWER members to develop an ODCC compliant system based on the OpenPOWER design concept which leverages the POWER architecture. + +OpenPOWER members intend to combine their extensive system design experience and differentiated technologies to develop powerful, new, cost efficient servers for some of the world’s largest data centers. + +**Continued Membership Growth** The membership base of the OpenPOWER Foundation has grown rapidly in roughly a year and a half. What started in December 2013 with five founders – Google, IBM, NVDIA, Mellanox and Tyan – is now a diverse community of more than 130 members representing leading technology companies, academic institutions and researchers located in 22 countries across six continents. Some of the most recent additions include NEC, Penguin Computing and the University of California. A list of current members is available at [https://openpowerfoundation.org/membership/current-members](https://openpowerfoundation.org/membership/current-members). 
+
+**About the OpenPOWER Foundation** The OpenPOWER Foundation is a global, open development membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. OpenPOWER members share expertise, investment and server-class intellectual property to develop solutions that serve the evolving needs of technology customers.
+
+The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation.
+
+For further details visit [www.openpowerfoundation.org](https://openpowerfoundation.org/).
+
+\# # #
+
+Media Contact
+
+**Abby Schoffman** Text100, for the OpenPOWER Foundation [abby.schoffman@text100.com](mailto:abby.schoffman@text100.com) 212.871.3928 diff --git a/content/blog/openpower-ai-workshop-barcelona.md b/content/blog/openpower-ai-workshop-barcelona.md new file mode 100644 index 0000000..2b3b70b --- /dev/null +++ b/content/blog/openpower-ai-workshop-barcelona.md @@ -0,0 +1,35 @@ +--- +title: "OpenPOWER and AI Workshop at BSC, Spain" +date: "2018-07-11" +categories: + - "blogs" +tags: + - "featured" +--- +
+By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/)
+
+The OpenPOWER and AI workshop, hosted by the Barcelona Supercomputing Center in Barcelona, Spain, was held on June 18th, 2018.
+
+Professor Mateo Valero Cortés, BSC Director, kicked off the program by discussing the importance of IBM and OpenPOWER collaborations to the researchers and executives in attendance. He also mentioned that the use of Power9 supercomputers will increase in the future.
+
+Key takeaways from the workshop include:
+
+- researchers can get up and running in a few hours on the DL framework of their choice
+- all memory in the system is coherent, so models are not limited to the memory capacity of the GPU
+- clustering is at 95% efficiency
+- data-wrangling is greatly automated
+- parameter tuning is monitored by the system
+- all of this is available in the PowerAI platform, of which BSC is setting up 54 nodes
+
+Here, you’ll find five presentations shared at the workshop:
+
+Presentation 1: [Introduction and OpenPOWER Academia Discussion Group](https://www.slideshare.net/ganesannarayanasamy/ai-openpower-academia-discussion-group)
+
+Presentation 2: [Power9 and PowerAI Features](https://www.slideshare.net/ganesannarayanasamy/2018-bsc-power9-and-power-ai)
+
+Presentation 3: [Introduction to Snap Machine Learning](https://www.slideshare.net/ganesannarayanasamy/snap-machine-learning)
+
+Presentation 4: [GPU Acceleration in Computational Fluid Dynamics with OpenMP 4.5 and CUDA in OpenPOWER Platforms](https://www.slideshare.net/ganesannarayanasamy/cfd-on-power)
+
+Presentation 5: [Large Model Support and Distributed Deep Learning Labs](https://www.slideshare.net/ganesannarayanasamy/bsc-lms-ddl) diff --git a/content/blog/openpower-ai-workshop-iit-delhi.md b/content/blog/openpower-ai-workshop-iit-delhi.md new file mode 100644 index 0000000..38680a0 --- /dev/null +++ b/content/blog/openpower-ai-workshop-iit-delhi.md @@ -0,0 +1,22 @@ +--- +title: "OpenPOWER and AI Workshop at IIT Delhi Campus" +date: "2018-11-13" +categories: + - "blogs" +tags: + - "featured" +--- +
+Josiah Samuel, advisory software engineer, IBM
+
+[![](images/IIT-Delhi-1024x499.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/IIT-Delhi.jpg)
+
+I recently attended the OpenPOWER and AI Workshop at the Indian Institute of Technology Delhi. This workshop gathered 30 students to learn about IBM and its work with Artificial Intelligence.
+
+I was able to offer these students hands-on sessions discussing PowerAI, SnapML and machine learning. One portion of the workshop focused on walking through a problem statement. This workshop's statement was: "How to make a quick prediction of whether a credit amount can be sanctioned or not." After explaining the assignment, the students were taught how to do exploratory data analysis using the Matplotlib library. The resulting charts showed the correlation between various attributes. Students were also taught how to convert raw data into a format machine learning algorithms can understand. Throughout, the students were able to try things on their own using the provided POWER8 setup.
+
+Other tools used to solve this problem included Scikit-learn's logistic regression API to train the model, first on a small dataset, which showed low accuracy. This allowed the students to view the metrics. Students learned that as the dataset grew, the model became more accurate.
+
+Following this, we contrasted Scikit-learn with SnapML. SnapML can perform ML training on large datasets with at least a 10x decrease in training time compared to Scikit-learn, with no compromise on the model's accuracy.
+
+It was an incredible experience to share my work with the IIT Delhi students and walk them through a real-life scenario.
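For readers who would like to try a similar exercise, here is a minimal, illustrative sketch of the workflow described above: training a logistic regression credit-approval model with scikit-learn. The dataset and feature names below are synthetic stand-ins, not the data or notebooks used at the workshop.

```python
# Minimal sketch of a credit-approval exercise (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1_000

# Three toy features: applicant income, requested amount, years of credit history.
income = rng.normal(50_000, 15_000, n)
amount = rng.normal(20_000, 8_000, n)
history = rng.integers(0, 20, n)
X = np.column_stack([income, amount, history])

# Toy labeling rule: sanction credit when income is high relative to the request.
y = (income - 0.8 * amount + 1_000 * history > 40_000).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale the features, then fit a logistic regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Because SnapML estimators follow a similar fit/predict style, the same workflow lends itself to the kind of side-by-side training-time comparison described above.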
diff --git a/content/blog/openpower-ai-workshop-nitk.md b/content/blog/openpower-ai-workshop-nitk.md new file mode 100644 index 0000000..5ac0837 --- /dev/null +++ b/content/blog/openpower-ai-workshop-nitk.md @@ -0,0 +1,29 @@ +--- +title: "OpenPOWER and AI Workshop Continues Partnership Between IBM and NITK" +date: "2018-12-12" +categories: + - "blogs" +tags: + - "featured" +--- +
+By Basavaraj Talawar, assistant professor, Computer Science and Engineering Department, National Institute of Technology Karnataka Surathkal
+
+[![](images/NITK-1024x758.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/12/NITK.png)
+
+We recently held a half-day session at the National Institute of Technology Karnataka (NITK) on IBM’s deep learning initiatives. I was proud to be joined by Romeo Kienzler, chief data scientist, IBM Watson IoT and IBM Certified Senior Architect.
+
+We first reviewed the history and important milestones of the long-standing collaboration between IBM and NITK, including:
+
+- Visit by Dipankar Sarma, distinguished scientist, IBM, to NITK in November 2014
+- Creation of a memorandum of understanding between the two organizations in November 2016
+- Establishment of the NITK-IBM Computer Systems Research Group in November 2016
+- Start of the POWER on gem5 project in July 2016
+
+Our POWER on gem5 project is currently in its third year and is closer than ever to reaching its goal of executing a full-fledged Linux kernel on the POWER module in gem5. My colleague Kajol Jain, a student working on the project, shared a comprehensive summary of her work getting the serial console up on the POWER module in gem5. Previous milestones accomplished in the project include support for the 64b integer POWER ISA 3.0, ABI v2 support, MMU support and Radix page support.
+
+Kienzler then shared some of the exciting breakthroughs in the AI world, both from the IBM point of view and in general. He began with the fundamental linear algebra required for the machine learning concepts that followed. Then, he covered an introduction to tensors and regressions, convolutional neural networks, back propagation and related concepts.
+
+Kienzler went on to share a glimpse of what AI will do for us in the future. An overview including automated self-aware automotive design, driverless vehicles, self-aware 3D printing and cognizant robots certainly piqued our interest!
+
+The audience, which consisted mostly of bright bachelor of technology students, was in step with and engaged by the concepts throughout the presentation. Thank you to Kienzler for sharing his insight and expertise with us, and IBM for their continued partnership with NITK.
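As a small companion to the regression and back propagation concepts mentioned above, the sketch below (illustrative only, and not taken from the workshop material) fits a one-variable linear model by gradient descent in NumPy; the gradient step here is the simplest special case of back propagation.

```python
# Illustrative sketch only: fit y = w*x + b by gradient descent.
# The gradient computation below is the one-layer special case of back propagation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, 200)   # noisy synthetic data

w, b = 0.0, 0.0
lr = 0.1                                        # learning rate

for step in range(500):
    y_hat = w * x + b                  # forward pass
    err = y_hat - y
    loss = np.mean(err ** 2)           # mean squared error
    grad_w = 2.0 * np.mean(err * x)    # backward pass: d(loss)/dw
    grad_b = 2.0 * np.mean(err)        # d(loss)/db
    w -= lr * grad_w                   # gradient descent update
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```

In a deep network, the same chain-rule bookkeeping is applied layer by layer, which is what the deep learning frameworks discussed in these workshops automate.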
diff --git a/content/blog/openpower-announces-public-review-of-power-vector-intrinsics-programming-reference.md b/content/blog/openpower-announces-public-review-of-power-vector-intrinsics-programming-reference.md new file mode 100644 index 0000000..d6e50b1 --- /dev/null +++ b/content/blog/openpower-announces-public-review-of-power-vector-intrinsics-programming-reference.md @@ -0,0 +1,25 @@ +--- +title: "OpenPOWER announces public review of Power Vector Intrinsics Programming Reference" +date: "2020-04-03" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "power-vector-intrinsics-programming-reference" + - "pvipr" +--- + +_By_ [_Bill Schmidt_](https://www.linkedin.com/in/williamschmidtphd/)_, Ph.D., Toolchain Architect for Linux on Power, IBM_ + +The OpenPOWER Foundation System Software Work Group is happy to announce that the public review of the [Power Vector Intrinsics Programming Reference](https://openpowerfoundation.org/?resource_lib=power-vector-intrinsic-programming-reference-review-draft) (PVIPR) is now open! + +The new PVIPR document provides developers with resources for effectively programming Power Instruction Set Architecture’s vector instructions. This document provides background on the vector architecture and the application binary interface (ABI), describes Power’s unique bi-endian vector programming model, discusses best practices for vector programming, and contains a complete reference for each vector intrinsic function available for Power ISA 2.7 and 3.0. + +This information is also useful for compiler developers who want to be compliant with existing compilers for Power. To this end, PVIPR provides sample implementations of each vector intrinsic function. + +Historically, some of this information is also available in the [64b ELFv2 ABI Specification](https://openpowerfoundation.org/?resource_lib=64-bit-elf-v2-abi-specification-power-architecture). Since the vector intrinsics are not truly part of the ABI, we moved the information into a new, expanded document, and improved its utility as a quality reference document. Future versions of the ELFv2 ABI specification will remove the now-redundant information and point to this document instead. + +Comments and questions about this document can always be submitted to the public mailing list for this document at [syssw-programming-guides@mailinglist.openpowerfoundation.org](mailto:syssw-programming-guides@mailinglist.openpowerfoundation.org). Comments during this review period will help us complete version 1.0.0. + +The commenting period for the Power Vector Intrinsic Programming Reference closes on May 15, 2020. In order for this document to be as useful as possible to the OpenPOWER community, we need your input! Thank you in advance for your support. 
diff --git a/content/blog/openpower-announces-rethink-the-data-center-speaker-lineup-at-summit.md b/content/blog/openpower-announces-rethink-the-data-center-speaker-lineup-at-summit.md new file mode 100644 index 0000000..eb12470 --- /dev/null +++ b/content/blog/openpower-announces-rethink-the-data-center-speaker-lineup-at-summit.md @@ -0,0 +1,45 @@ +--- +title: "OpenPOWER Announces “Rethink the Data Center” Speaker Lineup at Summit" +date: "2015-02-19" +categories: + - "press-releases" + - "blogs" +--- + +_OpenPOWER Summit to Feature Over 35 Member Presentations,_ _Exhibitor Pavillion, ISV Roundtables and Firmware Training Sessions_ + +  + +**SAN JOSE, Calif., Feb 19, 2015 –** Today, the OpenPOWER Foundation announced a solid lineup of speakers headlining its inaugural [OpenPOWER Summit](https://openpowerfoundation.org/2015-summit/) at NVIDIA’s [GPU Technology Conference](http://www.gputechconf.com/) at the San Jose Convention Center, March 17-20.  Drawing from the open development organization’s [more than 100 members](https://openpowerfoundation.org/blogs/openpower-breaks-through-100-a-signal-of-even-more-innovation-to-come/) worldwide, the Summit’s organizers have lined up over 35 member presentations tied to the event’s “Rethink the Data Center” theme. + +Attendees will have the opportunity to hear from OpenPOWER leadership on Wednesday, March 18 through a series of keynotes that include: + +- OpenPOWER Chairman Gordon MacKean’s opening keynote “Advancing the OpenPOWER Vision” +- OpenPOWER President Brad McCredie’s presentation “The Disruptive Technology of OpenPOWER” +- OpenPOWER Technical Steering Committee Chair Jeff Brown’s detailing of the progress of OpenPOWER’s 8 technical work groups and other technical initiatives underway + +Several other OpenPOWER members will take the Summit’s main stage on March 18 to reveal a wide range of advancements including OpenPOWER technology, applications, open hardware and software developments. Confirmed presenters include Algo-Logic Systems, Altera, Canonical, DataDirect Networks, IBM, Jülich Supercomputing Centre, Mellanox, Nallatech, NVIDIA, Oak Ridge National Laboratory, PGI, PMC Sierra, Rackspace, Rice University, Suzhou PowerCore, Teamsun, Tyan and Xilinx. + +Additional member presentations will take place in the presentation theater of the OpenPOWER Pavilion on March 17-18.  A full list of presenters and abstracts can be found at [https://openpowerfoundation.org/2015-summit/](https://openpowerfoundation.org/2015-summit/) + +**About the OpenPOWER Summit** + +The three-day event will kick off the morning of Tuesday, March 17 with an exhibitor pavilion where OpenPOWER members will display and demonstrate OpenPOWER-based products and projects.  The pavilion will remain open through March 19. Following Wednesday’s full day of speaker presentations, Thursday’s schedule includes morning and afternoon ISV Roundtables hosted by Canonical and two OpenPOWER Firmware Training Labs. + +The Summit will host sector leaders, open technology champions, OpenPOWER members, industry press and analysts, and a diverse and growing ecosystem building momentum to accelerate new technology and foster cutting-edge environments. + +According to OpenPOWER Vice President and Summit organizer Michael Diamond of NVIDIA, “The Summit is the place to be to learn about what is going on with OpenPOWER and get involved.” + +To register, visit [https://openpowerfoundation.org/2015-summit/](https://openpowerfoundation.org/2015-summit/). 
To stay up to date on the OpenPOWER Summit, connect with the OpenPOWER Foundation on [LinkedIn](https://www.linkedin.com/groups/OpenPOWER-Foundation-7460635), [Facebook](https://www.facebook.com/openpower), [Google+](https://plus.google.com/117658335406766324024/posts), or [Twitter](https://twitter.com/openpowerorg) and follow the event hashtag #OpenPOWERSummit. + +**About OpenPOWER Foundation** + +The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the Foundation is to create an open ecosystem, using the POWER architecture to share expertise, investment, and server class intellectual property to serve the evolving needs of customers and industry. + +- OpenPOWER enables collaborative innovation for shared building blocks +- OpenPOWER supports independent innovation by members +- OpenPOWER builds on industry leading technology +- OpenPOWER thrives as an open development community +- Founded December 2013 by Google, NVIDIA, Tyan, Mellanox and IBM, the organization has grown to more than 100 members worldwide from all sectors of the data center ecosystem at large. For more information, visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org) + +**Media Contact:** Kristin Bryson OpenPOWER Media Relations Office: [914-766-4221](tel:914-766-4221) Cell: [203-241-9190](tel:203-241-9190) Email: [kabryson@us.ibm.com](mailto:kabryson@us.ibm.com) diff --git a/content/blog/openpower-at-the-international-conference-on-supercomputing.md b/content/blog/openpower-at-the-international-conference-on-supercomputing.md new file mode 100644 index 0000000..8ad0086 --- /dev/null +++ b/content/blog/openpower-at-the-international-conference-on-supercomputing.md @@ -0,0 +1,54 @@ +--- +title: "OpenPOWER at the International Conference on Supercomputing" +date: "2020-07-30" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "power-isa" + - "microwatt" + - "international-conference-on-supercomputing" +--- + +Earlier this month, OpenPOWER participated in the [International Conference on Supercomputing](https://ics2020.bsc.es/RVandOpenPOWER), co-hosted by the Universitat Politècnica de Catalunya (UPC-BarcelonaTECH) and the Barcelona Supercomputing Center (BSC-CNS). The conference showcased the latest research in high-performance computing systems. + +Included in the agenda was a dedicated Workshop on RISC-V and OpenPOWER specifically to discuss alternative instruction set architectures and the growing trend of open source hardware. Workshop organizer [John Davis](https://www.linkedin.com/in/johnddavis/), director of the Laboratory for Open Computer Architecture at the Barcelona Supercomputing Center summarized the current landscape well in his introduction: “It’s a great time to be around, because we have the proliferation of things like RISC-V and OpenPOWER. These are by no means the first, but it seems like the time is right for technology requirements and the success we’ve seen with open source software to translate into the open source hardware space with these open source ISAs.” + +Below you can find the abstracts of the OpenPOWER presentations presented at the conference. 
+ +**OpenPOWER Foundation Update: New leadership and a bright open future** + +By James Kulina, Executive Director of OpenPOWER + +In this talk, James introduces himself as the new Executive Director for the OpenPOWER Foundation and provides a summary of the latest developments within the OpenPOWER community. He also covers what's ahead for the Foundation as it further integrates and strengthens its collaboration with other Linux Foundation projects. + +According to Kulina, “the vision of the OpenPOWER Foundation is to energize our member companies to start devoting energy and resources to drive a thriving ecosystem around collaborative co-development of this common license of POWER IP, cores, tools, software and systems.” He continued, “We want to make it as simple as possible to go from an idea to silicon, or to a system, or to port your software over to the POWER architecture.” + +[View James’ session on YouTube](https://www.youtube.com/watch?v=6mXxNbKM3Qs&feature=youtu.be). + +**Microwatt and GHDL - An Open Hardware CPU written in VHDL, synthesized with Open Source tools** + +By Anton Blanchard, Distinguished Engineer at OpenPOWER and Linux Kernel Hacker at IBM, and Tristan Gingold, Hardware Engineer at CERN + +_Anton and Tristan share an overview of the Microwatt core. Microwatt is a 64 bit POWER ISA soft processor, written in VHDL. Over time it has grown from supporting Micropython, to Zephyr and most recently Linux. The presentation also includes an overview of GHDL and how it can be used for both simulation and synthesis of a medium complexity VHDL project._ + +[View Anton and Tristan’s session on YouTube.](https://www.youtube.com/watch?v=4XkJCzP4_ZY&feature=youtu.be) + +**The Open Power ISA: A Summary of Architecture Compliancy Options and the Latest Foundations for Future Expansion** + +By Brian Thompto, Distinguished Engineer, POWER Processor Architect at IBM + +_The open POWER ISA enables access to unencumbered open innovation and a mature software ecosystem developed over the last 30 years. In this talk, Brian reviews the major options for architectural compliancy that provide freedom of choice in design, including four recently specified compliancy subsets, separate optional features, and custom extensions._ + +[View Brian’s session on YouTube](https://www.youtube.com/watch?v=0zIwLCnIuqg&feature=youtu.be). + +**Advanced High-Performance Computing Features of the OpenPOWER ISA** + +By Jose Moreira, Research Staff Member at IBM + +In this presentation, Jose raises awareness and interest in the newest features of the POWER ISA, which he believes can lead to further research in processor architecture and programming environments. Some of the most promising application areas include graph algorithms, classical machine learning and deep learning. + +[View Jose’s session on YouTub](https://www.youtube.com/watch?v=0zIwLCnIuqg&feature=youtu.be)e. + +If you’re interested in learning more about these sessions or about the OpenPOWER Foundation, consider joining our [LinkedIn Group](https://www.linkedin.com/groups/7460635/) or our [Slack workspace](https://openpowerfoundation.org/get-involved/slack-workspace/) where you can connect and collaborate with other OpenPOWER Foundation members. 
diff --git a/content/blog/openpower-barreleye-server-to-market.md b/content/blog/openpower-barreleye-server-to-market.md new file mode 100644 index 0000000..6b6ad7a --- /dev/null +++ b/content/blog/openpower-barreleye-server-to-market.md @@ -0,0 +1,73 @@ +--- +title: "OpenPOWER Members Bring Rackspace-Led Open Compute Barreleye Server to Market" +date: "2016-09-20" +categories: + - "blogs" +tags: + - "featured" + - "barreleye" +--- + +_By Sam Ponedal, Social Strategist, OpenPOWER Foundation_ + +![Barreleye-multiple-696x464](images/Barreleye-multiple-696x464.jpg) + +Back in March, we told you about how [OpenPOWER members StackVelocity, Mark III Systems, and Penguin Computing](https://openpowerfoundation.org/blogs/open-compute-summit-barreleye/) had adopted [Rackspace's Barreleye server design](https://openpowerfoundation.org/blogs/openpower-open-compute-rackspace-barreleye/). Today,we are pleased to relay the news from IBM Edge that our members have reached the next milestone in their journey, and have released their Barreleye server designs. Let's take a look at all the Barreleye news from our members: + +## [Rackspace](http://blog.rackspace.com/now-get-your-own-barreleye) + +_Originally published on Rackspace.com by Aaron Sullivan_ + +Barreleye is available from multiple outlets, to suit many kinds of consumers. From solution providers who specialize in hyperscale, to high performance computing, to the IBM business partner network, you can purchase Barreleye from a company that understands your business. + +Barreleye works with a variety of Linux distributions and KVM hypervisors. It has chassis options for those who like to keep their storage high capacity, in-box and powerful, or light-weight and low-cost. It is configurable for basic low-cost networking, or very high-throughput networking. And for a server with such a low mechanical profile, it has great PCI adapter capacity. + +If you want to test drive it, but don’t have an Open Rack handy, there’s a simple-to-use benchtop power supply (called Lunchbox) we developed along with Barreleye. Here are a few other things that Barreleye does: + +- Leverages [one of the most powerful](http://blog.rackspace.com/openpower-open-compute-barreleye#serverspecs) 2-socket servers on the planet. +- Gets your organization closer to the cutting edge of open hardware development. +- Makes a clear statement to your suppliers: you expect more freedom, value and influence. + +## [Mark III Systems](http://www.markiiisys.com/blog/2016/09/19/barreleye-general-availability/) + +_Originally published on markiiisys.com by Andy Lin_ + +Today at IBM Edge 2016, Mark III and our partners in the OpenPOWER Foundation are announcing the immediate availability of an OpenPOWER server platform based on the Barreleye Open Compute Project (OCP) design. + +We’re doing this announcement specifically in partnership with Penguin Computing under the OCP-compatible model of the Penguin Magna 1015, which provides an enterprise supported version of the Barreleye system.  As a long-time IBM Premier Business Partner with two decades of experience with POWER, our strong team of engineers are also available to offer their expertise and services around the Magna 1015 platform to ensure that our joint OpenPOWER clients are successful. 
+ +[![magna-1015-350x407](images/magna-1015-350x407.png)](http://www.markiiisys.com/blog/wp-content/uploads/2016/09/magna-1015-350x407.png) + +  + +If you might recall, Barreleye is based on the Rackspace-led OCP design that incorporates OpenPOWER technologies (including POWER8 processors), and was a system that Mark III [announced back in March at the OCP Summit](http://www.markiiisys.com/blog/2016/03/14/mark-iii-openpower-open-compute-project-premise-barreleye/) that it would be offering very soon. + +We view the Magna 1015 (Barreleye) as fitting a key niche in our portfolio of OpenPOWER platforms, as many hyperscale users of compute have looked at or are starting to look at OCP approaches to maximizing datacenter efficiency as they grow. + +As a member of both foundations, we’re very excited about the future of both OpenPOWER and OCP in delivering highly efficient architectures for the bandwidth-intensive workloads of the next decade.  To us, Barreleye is the culmination of both these industry movements, but is also just the beginning of a new wave of innovation. + +## [Penguin Computing](http://www.penguincomputing.com/company/media/press-releases/penguin-computing-announces-openpower-server-platform-with-partner-mark-iii-systems/) + +_Originally published on penguincomputing.com_ + +Penguin Computing, a provider of high performance computing, enterprise data center and cloud solutions, today announced immediate availability of Penguin Magna 1015, an OpenPOWER based system for cloud and hyperscale data center environments. + +Based on the “Barreleye” platform design pioneered by Rackspace and promoted by the OpenPOWER Foundation and the Open Compute Project (OCP) Foundation, Penguin Magna 1015 targets memory and I/O intensive workloads, including high density virtualization and data analytics. The Magna 1015 system uses the Open Rack physical infrastructure defined by the OCP Foundation and adopted by the largest hyperscale data centers, providing operational cost savings from the shared power infrastructure and improved serviceability. + +“Penguin is all about open technologies and offering choice of platforms for the customer application”, said Jussi Kukkonen, Director, Product Management, Penguin Computing. “Penguin’s partnership with Mark III provides our customers with a unique combination of comprehensive OCP server, storage and networking catalog together with OpenPOWER architecture and applications expertise.” + +“As a fellow member of the OpenPOWER Foundation, Mark III is excited to be working with Penguin Computing on OCP solutions enabled with OpenPOWER technologies,” said Andy Lin, Vice President of Strategy, Mark III Systems. “We believe that an OCP compatible system powered by OpenPOWER processors presents a truly unique value proposition for hyperscale users of compute looking for a differentiated platform to efficiently run and scale high-bandwidth workloads, including big data analytics, HPC, and cloud.” + +## [StackVelocity](http://go.stackvelocity.com/blog/truly-enabling-an-open-source-ecosystem) + +_Originally published on go.stackvelocity.com by Doug Taylor_ + +The Open Power Foundation (OPF) stands to become a significant complement to OCP. The IBM POWER architecture, which is well known in the industry as the performance leader, has moved to an open licensing model. Through the OPF, an ecosystem of chip companies, board manufacturers, networking vendors, etc., are all driving innovation to create the next generation of Web 2.0 compute platforms that are open. 
+ +As testament to how well OPF and OCP foundations work together, Doug Balog, General Manager for POWER Systems at IBM, announced today at the IBM EDGE event that [Barreleye](http://stackvelocity.com/hardware-solutions/openpower-solutions/) is ready for mass production and available for purchase. [Barreleye](http://stackvelocity.com/hardware-solutions/openpower-solutions/) is a powerful and highly efficient server built with OpenPOWERTM technologies and delivered through the Open Compute Foundation. StackVelocity is excited to be collaborating with the Open community by bringing [Barreleye](http://stackvelocity.com/hardware-solutions/openpower-solutions/) to market. + +StackVelocity is able to complement the performance of OpenPOWER with our very own high-density OCP storage platform called [HatTrick Storage](http://go.stackvelocity.com/stackvelocity-hattrick-data-sheet). The [HatTrick Storage](http://go.stackvelocity.com/stackvelocity-hattrick-data-sheet) platform delivers up to 15 LFF drives in the same form factor as a “Winterfell/Leopard” server, allowing up to 45 LFF drives in 2 OU—that’s a 50% increase in density over the currently available solutions. It provides substantial capacity in an extremely efficient footprint and can be configured to match any workload. + +For those customers that also need a standard EIA 19” OpenPOWER solution, we have a high-performance platform called [Saba](http://go.stackvelocity.com/saba-2u-high-performance-data-analytics-solution) that features OpenPOWER Power8TM processors to tackle the challenge of extracting value from mass amounts of information. [Saba](http://go.stackvelocity.com/saba-2u-high-performance-data-analytics-solution) can support up to 1TB of memory and 24 SFF drives. This means massive amounts of information are brought to compute resources in real time and business insight is maximized. + +These building blocks provide the core from which we can help our customers tailor [OpenPOWER solutions](http://stackvelocity.com/hardware-solutions/openpower-solutions/) that fit their unique business needs. diff --git a/content/blog/openpower-breaks-through-100-a-signal-of-even-more-innovation-to-come.md b/content/blog/openpower-breaks-through-100-a-signal-of-even-more-innovation-to-come.md new file mode 100644 index 0000000..0fc54fe --- /dev/null +++ b/content/blog/openpower-breaks-through-100-a-signal-of-even-more-innovation-to-come.md @@ -0,0 +1,20 @@ +--- +title: "OpenPOWER Breaks Through 100 ... A Signal of Even More Innovation to Come!" +date: "2015-02-17" +categories: + - "blogs" +--- + +By: _Gordon MacKean, OpenPOWER Chairman_ + +Today the OpenPOWER Foundation hit a new milestone. We are now officially 101 members strong. But, we all know numbers in themselves are not what is significant, it is what they represent. For the OpenPOWER community, it signals to us that we're on the right track and, with each new member that comes on board, our collaboration and resulting innovation multiplies. Speaking of numbers, here's a few more that illustrate our progress ... + +- **1** OpenPOWER started with one shared idea -- to drive more innovation in the data center. +- **5** Beginning with five founders -- IBM, Google, NVIDIA, Mellanox and Tyan, the OpenPOWER Foundation has exponentially grown to now ... +- **101** OpenPOWER Foundation members around the world representing a diverse set of leaders from across the technology industry. 
From cloud service providers and technology consumers to chip designers, hardware components, system vendors and firmware and software providers and beyond, they're all leveraging POWER's open architecture to drive innovation. OpenPOWER's membership is also geographically diverse, representing ... +- **22** countries across 6 continents with a membership roster spanning Asia, North America, South America, Australia, Africa and Europe. Note: No members from Antarctica. Yet. ;-) +- **8** To date the Foundation has chartered eight member Working Groups organized by technical focus areas of interest including interoperability, system software, memory, compliance, hardware architecture, application software, accelerators and the development of an open server development platform. The work being accomplished by these groups supports ... +- **35** confirmed member presentations detailing OpenPOWER products and projects underway that will be shared at the OpenPOWER Foundation's debut conference, the OpenPOWER Summit, taking place at the San Jose Convention Center March 17-19. So hurry, there's only .... +- **28** days left until the Summit begins! Come and join us. Learn more and register today by going to [www.openpowerfoundation.org/2015-summit](http://www.openpowerfoundation.org/2015-summit) + +Looking forward to seeing you in San Jose! Now, let's get back to collaborating and innovating. diff --git a/content/blog/openpower-breaks-through-100.md b/content/blog/openpower-breaks-through-100.md new file mode 100644 index 0000000..9e9b764 --- /dev/null +++ b/content/blog/openpower-breaks-through-100.md @@ -0,0 +1,22 @@ +--- +title: "OpenPOWER Breaks Through 100 ... A Signal of Even More Innovation to Come!" +date: "2015-02-17" +categories: + - "blogs" +--- + +1 OpenPOWER started with one shared idea -- to drive more innovation in the data center. + +5 Beginning with five founders -- IBM, Google, NVIDIA, Mellanox and Tyan, the OpenPOWER Foundation has exponentially grown to now ... + +101 OpenPOWER Foundation members around the world representing a diverse set of leaders from across the technology industry.  From cloud service providers and technology consumers to chip designers, hardware components, system vendors and firmware and software providers and beyond, they're all leveraging POWER's open architecture to drive innovation. OpenPOWER's membership is also geographically diverse, representing ... + +22 countries across 6 continents with a membership roster spanning Asia, North America, South America, Australia, Africa and Europe. Note: No members from Antarctica. Yet. ;-) + +8 To date the Foundation has chartered eight member Working Groups organized by technical focus areas of interest including interoperability, system software, memory, compliance, hardware architecture, application software, accelerators and the development of an open server development platform. The work being accomplished by these groups supports ... + +35 confirmed member presentations detailing OpenPOWER products and projects underway that will be shared at the OpenPOWER Foundation's debut conference, the OpenPOWER Summit, taking place at the San Jose Convention Center March 17-19. So hurry, there's only .... + +28 days left until the Summit begins! Come and join us. Learn more and register today by going to [www.openpowerfoundation.org/2015-summit](http://www.openpowerfoundation.org/2015-summit) + +Looking forward to seeing you in San Jose! Now, let's get back to collaborating and innovating. 
diff --git a/content/blog/openpower-cognitive-cup.md b/content/blog/openpower-cognitive-cup.md new file mode 100644 index 0000000..2548945 --- /dev/null +++ b/content/blog/openpower-cognitive-cup.md @@ -0,0 +1,43 @@ +--- +title: "Develop Exciting Cognitive Applications in the OpenPOWER Developer Challenge" +date: "2016-07-05" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Mike Gschwind, Chief Engineer, Machine Learning and Deep Learning, IBM_ + +Cognitive Applications have transformed the face of computing and how humans interact with computers. Some examples are driver-assistive technologies for enhanced road safety, personalized assistants like Siri and Google Now for improved productivity; and enhanced public security through advanced threat detection. Reflecting the increasing importance of cognitive applications, when we launched the [OpenPOWER Developer Challenge](http://openpower.devpost.com) earlier this month we included a competition around developing cognitive applications: the Cognitive Cup! + +## Deep Learning on OpenPOWER + +Developers of many cognitive applications are no longer developing using imperative, functional, logic, or object-oriented programming languages, but in the language of the brain:  artificial neural networks, or ANNs. ANNs are the cognitive development infrastructure of choice, and with them, developers are “programming with data”.  Rather than coding desired outcomes, developers teach applications by training them with a training corpus by associating a desired outcome with each training sample.  This way of teaching a computer is a sub-branch of machine learning that is referred to as “deep learning”. + +Like traditional programming environments, deep learning has its compilers and IDEs, known under the name of “Deep learning Frameworks”, and IBM recently released an entire application suite of [Deep Learning Frameworks optimized for OpenPOWER](http://bit.ly/1P3YBFi). These frameworks, hosted in the [SuperVessel OpenPOWER development cloud](http://www.ptopenlab.com), provide the development environment for the [Cognitive Cup](http://openpower.devpost.com/details/cognitive_cup). + +OpenPOWER is all about creating a broad ecosystem with opportunities to accelerate your workloads.  For the Cognitive Cup, we provide two types of accelerators: GPUs and FPGAs.  GPUs are used by the Deep Learning framework to train your neural network.  When you want to use the neural network during the “classification” phase, you have a choice of Power CPUs, GPUs and FPGAs.  Learn more about FPGA acceleration for the classification at our upcoming [Google Hangout showcasing how you can use Xilinx FPGAs to accelerate deep neural networks using AccDNN in the Supervessel cloud](http://bit.ly/29aSrUY). + +## Compete in the Cognitive Cup + +The Cognitive Cup has three categories, varying in difficulty, to give newcomers an opportunity to develop their first cognitive applications or experienced developers the opportunity to showcase their advanced skills. The three categories are: + +- **ArtNet****:** Develop an application that recognizes artworks, styles, periods, artists, and artistic techniques.  By defining a network and training it with existing artwork, create an application that speaks to the inner art connoisseur in you! We invite you to use your own imagination on what a cognitive application can do when meeting the world of Art. To get started, check out the [WikiArt](http://www.wikiart.org/) database. 
+
- **TuneNet****:** Application development is a difficult undertaking, and TuneNet is an invitation to develop assistive programmer technologies.  Train a neural network to give developers recommendations about possible bugs and performance bottlenecks.  Initial academic work in this area is promising. Read more on this here:
    - ["Recognizing Correct Code", Hardesty, MIT](http://news.mit.edu/2016/faster-automatic-bug-repair-code-errors-0129)
    - ["Automatic Patch Generation by Learning Correct Code", Long & Rinard, MIT CSAIL](https://people.csail.mit.edu/fanl/papers/prophet-popl16.pdf)
    - ["Building Program Vector Representations for Deep Learning", Mou, Li, Liu, Peng, Jin, Xu, & Zhang, Peking University](https://arxiv.org/pdf/1409.3358.pdf)
    - ["Combining Deep Learning with Information Retrieval to Localize Buggy Files for Bug Reports", Lam, Nguyen, Nguyen, & Nguyen](https://www.computer.org/csdl/proceedings/ase/2015/0025/00/0025a476.pdf)
    - ["Deep Learning on Disassembly Data", Davis & Wolff](https://www.blackhat.com/docs/us-15/materials/us-15-Davis-Deep-Learning-On-Disassembly.pdf)
- **YourNet****:** If you find the previous categories too limiting, then you’ll love this category.  We’re letting you find your own challenge and solve it with a cognitive application! From recognizing animals by their photos to identifying birds by their song, or any number of other ideas: let your imagination fly!

While the Cognitive Cup is a track of its own in the OpenPOWER Developer Challenge, it is not isolated from the other application development opportunities; in fact, it’s quite the opposite!  TuneNet can create new applications to help application development for the Open Road Test track, and the Spark Challenge to build scalable accelerated applications can be combined with the Cognitive Cup to harness the power of clusters for your cognitive application. We’ll even reward bonus points to solutions that combine the Cognitive Cup and the Spark Rally.  To help you combine Spark parallelism with Cognitive Applications, these tracks use a common cloud image that includes both our deep learning frameworks and Spark.

### To learn more about the SuperVessel environment, [watch our Google Hangout](http://bit.ly/296L5Rw) and hear from experts on how to access and sign up for a SuperVessel virtual machine.

### To sign up for the Developer Challenge, visit [http://openpower.devpost.com](http://openpower.devpost.com).

* * *

[![](images/33601413.jpg)](https://openpowerfoundation.org/wp-content/uploads/2016/02/mkg.jpeg)_Dr. Michael Gschwind is Chief Engineer for Machine Learning and Deep Learning for IBM Systems where he leads the development of hardware/software integrated products for cognitive computing. During his career, Dr. Gschwind has been a technical leader for IBM's key transformational initiatives, leading the development of the OpenPOWER Hardware Architecture as well as the software interfaces of the OpenPOWER Software Ecosystem. In previous assignments, he was a chief architect for Blue Gene, POWER8, POWER7, and Cell BE. Dr. 
Gschwind is a Fellow of the IEEE, an IBM Master Inventor and a Member of the IBM Academy of Technology._ diff --git a/content/blog/openpower-deep-learning-distribution.md new file mode 100644 index 0000000..483b581 --- /dev/null +++ b/content/blog/openpower-deep-learning-distribution.md @@ -0,0 +1,48 @@ +--- +title: "New OpenPOWER Software Distribution Puts Deep Learning a Click Away" +date: "2016-05-27" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Michael Gschwind, Chief Engineer, Machine Learning and Deep Learning, IBM Systems_

I am pleased to announce that several major deep learning frameworks are **[now available](http://ibm.co/1YpWn5h)** on OpenPOWER as software "distros" (distributions) that are easily installable using the Ubuntu system installer.

![open key new 5](images/open-key-new-5.jpg)

As evidenced by new deep learning announcements and use cases from IBM Power Systems users like [University of Maryland Baltimore County](http://www.techtimes.com/articles/157356/20160510/umbc-ibm-collaborate-cybersecurity.htm), [the University of Illinois](http://www.zdnet.com/article/ibm-is-building-a-cognitive-computing-research-center-with-the-university-of-illinois/), and the [STFC-Hartree Centre](http://insidehpc.com/2015/06/uk-hartree-center-partners-with-ibm-on-big-data/), OpenPOWER is fast emerging as the premier platform for cognitive computing.

## Why Deep Learning?

Deep learning, or the use of multi-layer neural networks, has revolutionized speech recognition, natural language processing, and computer vision, and continues to revolutionize IT due to the availability of rich data sets, new methods for accelerating neural network training and extremely fast hardware with GPU accelerators.

Deep learning applications range from safety systems to personal assistants to enterprise systems. For example, driver assist technologies rely on machine and deep learning models to recognize objects in a rapidly changing environment, and personal digital assistant technology is learning to categorize e-mail, text messages, and other content based on context.  In the enterprise, machine and deep learning applications can identify high-value sales opportunities, enable smart call center automation, detect and react to intrusion or fraud, and suggest solutions to technical or business problems.

## Key Deep Learning Frameworks on OpenPOWER

Frameworks now available on OpenPOWER as pre-built binaries optimized for GPU acceleration include:

- **[Caffe](http://caffe.berkeleyvision.org/)**, a dedicated artificial neural network (ANN) training environment developed by the Berkeley Vision and Learning Center at the University of California at Berkeley
- **[Torch](http://torch.ch/)**, a framework consisting of several ANN modules built on an extensible mathematics library
- **[Theano](http://deeplearning.net/software/theano/)**, a Python library for defining, optimizing and evaluating the mathematical expressions from which ANNs are built

In addition to pre-built and optimized binaries for OpenPOWER with acceleration, we’ve ensured that these environments may be built from the source repository for those who prefer to compile their own binaries.   We've also enabled the DL4J (Deep Learning 4 Java), TensorFlow and CNTK frameworks for POWER and are working with these communities to ensure POWER support for these environments out-of-the-box. 
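As a quick illustration (this is a generic sketch, not part of the official installation instructions linked above), the snippet below assumes one of the GPU-enabled frameworks, here Theano, has already been installed from the OpenPOWER packages; it simply compiles and runs a tiny expression to confirm that the framework and the configured device are working. The device setting and package layout are assumptions that depend on how your system was configured.

```python
# Minimal post-install smoke test for a Theano build on an OpenPOWER system.
# Assumes Theano has been installed from the distribution described above;
# device selection (CPU vs. GPU) follows whatever is configured in ~/.theanorc.
import numpy as np
import theano
import theano.tensor as T

x = T.matrix("x")                               # symbolic input matrix
y = T.nnet.sigmoid(T.dot(x, x.T)).sum()         # a small ANN-style expression
f = theano.function([x], y)                     # compiled for the configured device

print("Theano device:", theano.config.device)   # e.g. "cpu", "gpu" or "cuda0"
data = np.random.rand(512, 512).astype(theano.config.floatX)
print("Result:", f(data))
```

If the device reports a GPU and the expression evaluates without error, the accelerated stack is wired together correctly; an equally small end-to-end check against Caffe or Torch serves the same purpose before moving on to real training workloads.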
+ +## POWER8: Ideal for Deep Learning + +POWER8 is ideal for deep learning, big data, and machine learning due to its high performance, large caches, 2x-3x higher memory bandwidth, very high I/O bandwidth, and of course, tight integration with GPU accelerators. POWER8’s parallel, multi-threaded architecture with high memory and I/O bandwidth is particularly well adapted to ensure that GPUs are used to their fullest potential. Today, these software packages are available on the [IBM Power System 822LC](https://www.ibm.com/marketplace/cloud/high-performance-computing/us/en-us) server that features two POWER8 CPUs along with two NVIDIA Tesla K80s. + +We are currently working on optimizing the deep learning software to take advantage of the [next generation IBM Power Systems server that will have POWER8 CPUs connected by the high-speed NVLink interface directly to NVIDIA Tesla P100 (Pascal) GPU accelerators](https://www.ibm.com/blogs/systems/ibm-power8-cpu-and-nvidia-pascal-gpu-speed-ahead-with-nvlink/). This brings a huge advantage to cognitive computing applications like deep learning by giving applications running on the GPU fast access to large system memory via the NVLink interface to the CPU. + +Coupled with the higher performance POWER8 CPUs, the overall workflow for applications like voice recognition, natural language processing, and computer vision that employ deep learning benefits from a massive performance leap thanks to data-centric system design and optimization. + +## [To get started with the MLDL frameworks, download the installation instructions here](http://ibm.co/1YpWn5h). + +Contact me at [mkg@us.ibm.com](mailto:mkg@us.ibm.com) to get started with an evaluation. + +* * * + +[![](images/33601413.jpg)](https://openpowerfoundation.org/wp-content/uploads/2016/02/mkg.jpeg)_Dr. Michael Gschwind is Chief Engineer for Machine Learning and Deep Learning for IBM Systems where he leads the development of hardware/software integrated products for cognitive computing. During his career, Dr. Gschwind has been a technical leader for IBM's key transformational initiatives, leading the development of the OpenPOWER Hardware Architecture as well as the software interfaces of the OpenPOWER Software Ecosystem. In previous assignments, he was a chief architect for Blue Gene, POWER8, POWER7, and Cell BE. Dr. Gschwind is a Fellow of the IEEE, an IBM Master Inventor and a Member of the IBM Academy of Technology._ diff --git a/content/blog/openpower-developer-challenge-finalists.md b/content/blog/openpower-developer-challenge-finalists.md new file mode 100644 index 0000000..912bc0c --- /dev/null +++ b/content/blog/openpower-developer-challenge-finalists.md @@ -0,0 +1,29 @@ +--- +title: "OpenPOWER Developer Challenge Finalists Announced" +date: "2016-10-01" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Calista Redmond, President, OpenPOWER Foundation_ + +https://youtu.be/55MtoqycQGM + +Recently at [IBM Edge](https://ibmgo.com/edge2016) I had the pleasure of announcing the Finalists of the 2016 [OpenPOWER Developer Challenge](http://openpower.devpost.com/).   From the kick-off of this global challenge in the Spring – a first-ever for the OpenPOWER Foundation – to the many MeetUps and Google Hangouts where we met Developers and Challenge participants around the world, it’s been a fantastic journey, and it’s not over yet. 
+

Hundreds of Developer Challenge participants worked throughout the summer using the [SuperVessel Developer Cloud](https://ptopenlab.com/cloudlabconsole/) to port, optimize, accelerate and scale HPC, Big Data & Analytics and Deep Learning applications on OpenPOWER.  They had access to hardware acceleration technologies including [NVIDIA GPUs](https://www.youtube.com/watch?v=vn5IpPHfuxk) and [FPGAs from Xilinx](https://www.youtube.com/watch?v=Zq93jQmCuLU), advanced development tools [including the IBM XL Compilers](https://www.ibm.com/developerworks/community/groups/service/html/communitystart?communityUuid=572f1638-121d-4788-8bbb-c4529577ba7d) and the [Linux on Power SDK](https://www-304.ibm.com/webapp/set2/sas/f/lopdiags/sdklop.html), and programming frameworks like Apache Spark and the [OpenPOWER Deep Learning Software Distribution](https://openpowerfoundation.org/blogs/deep-learning-options-on-openpower/).

The six projects qualifying as Finalists are:

- [Emergency Prediction on Spark](http://devpost.com/software/emergencypredictiononspark): Antonio Carlos Furtado from the University of Alberta predicts Seattle emergency call volumes with Deep Learning on OpenPOWER.
- [Medical Ultrasound on CAPI](http://devpost.com/software/medical-ultrasound-imaging-acceleration-based-on-capi): South China University accelerates Delay-and-Sum with POWER and CAPI-attached FPGAs to bring more speed to Cloud-based medical imaging.
- [OpenRBC Simulation](http://devpost.com/software/openrbc): Brown University gets closer to cracking the code on Red Blood Cell disorders using computational models on OpenPOWER.
- [TensorFlow Cancer Detection](http://devpost.com/software/distributedtensorflow4cancerdetection): Altoros Labs brings a turbo boost to automated cancer detection with OpenPOWER.
- [ArtNet Genre Classifier](http://devpost.com/software/artnet-genre-classifier): Praveen Sridhar and Pranav Sridhar turn OpenPOWER into an art connoisseur.
- [Scaling Up and Out a Bioinformatics Algorithm](http://devpost.com/software/scaling-up-and-out-a-bioinformatics-algorithm): Delft University of Technology advances precision medicine by scaling up and out on OpenPOWER.

All six of these projects will be awarded either a Grand, 2nd or 3rd Prize – stay tuned for the Grand Prize and rankings announcement during the upcoming [OpenPOWER Foundation Summit in Barcelona](https://openpowerfoundation.org/openpower-summit-europe/).  Finally, plan to join the Grand Prize winners with IBM and OpenPOWER at [SC16](http://sc16.supercomputing.org/) in Salt Lake City.

In 2016 Developers became the stars of the OpenPOWER Foundation, and this is just the beginning! Want to learn more about developing on Power? Visit the new [Linux on Power Developer Portal](https://developer.ibm.com/linuxonpower/). 
diff --git a/content/blog/openpower-developer-challenge-kinetica.md b/content/blog/openpower-developer-challenge-kinetica.md new file mode 100644 index 0000000..5a0ee8a --- /dev/null +++ b/content/blog/openpower-developer-challenge-kinetica.md @@ -0,0 +1,20 @@ +--- +title: "Why the OpenPOWER Developer Challenge is Important to Kinetica" +date: "2016-07-15" +categories: + - "blogs" +--- + +_By Amit Vij, CEO, Kinetica_ + +![kinetica_plainCA_BLACKLETTERSTRANSPARENTBG-copy-2](images/kinetica_plainCA_BLACKLETTERSTRANSPARENTBG-copy-2.png) + +At [Kinetica](http://www.kinetica.com) (formerly GPUdb), we have experienced first-hand how the massive hardware acceleration improvements made possible by OpenPOWER can have truly transformational benefits for enterprises. We are in the business of helping customers uncover new business insights in real-time from massively growing volumes of data, often spanning IoT and other streaming data sources. We simply couldn’t solve our customer’s problems with traditional data technologies. OpenPOWER not only makes it possible to deliver massive data processing performance gains at a fraction of the cost, it also allows our customers to tackle brand new challenges and to make the world a better place for us all. + +We were proud to be recognized by IDC recently with an [HPC Innovation Excellence Award](http://www.kinetica.com/press-release-usps/) for our work with the United States Postal Service. The USPS deploys Kinetica’s real-time data analytics and advanced visualization capabilities to [deliver goods more efficiently to more than 154 million addresses across the United States](http://www.gpudb.com/wp-content/uploads/2016/06/KineticaUSPSCaseStudy.pdf). + +[![OpenPOWER-Developer-Challenge_Banner02_800x320](images/OpenPOWER-Developer-Challenge_Banner02_800x320.jpg)](http://openpower.devpost.com) + +We get excited every time we learn about a new compelling use case and we’re equally as excited to hear entirely new ideas from developers participating in the [OpenPOWER Developer Challenge](http://openpower.devpost.com). Our CTO, Nima Negahban, is one of the judges and he is extremely eager to inspect the submissions as they arrive. He will also participate in the awards ceremony at SC16, helping to recognize the winning teams. + +## We have now reached the halfway point, so please remember to register if you haven’t already done so by going to [http://openpower.devpost.com](http://openpower.devpost.com). diff --git a/content/blog/openpower-developer-challenge.md b/content/blog/openpower-developer-challenge.md new file mode 100644 index 0000000..f73f1ea --- /dev/null +++ b/content/blog/openpower-developer-challenge.md @@ -0,0 +1,33 @@ +--- +title: "Announcing the OpenPOWER Developer Challenge: Tap the Power of Open" +date: "2016-04-06" +categories: + - "blogs" +--- + +_By Randall Ross, Ubuntu Community Manager, Canonical_ + +One thing that unites my work at Canonical as an Ubuntu Community Manager with my work for the OpenPOWER Foundation is both organizations’ clear and unrelenting passion for developers. They both know that developers are the true musicians when it comes to making OpenPOWER “sing”. As OpenPOWER member GPUdb said, “We’re making the instrument but they \[developers\] are making the song.” + +![M1 GPUdb](images/M1-GPUdb-1024x512.png) + +Without developers, hardware is like a high-performance exotic car sitting on a dealer's lot. 
We have the technology, but we need someone to drive that car to the Autobahn and “floor it!” (Having a relaxed speed limit helps.) + +We know that our OpenPOWER community has plenty of drivers waiting for the opportunity to show what they can do. You may have noticed several developer-focused activities and news items coming from OpenPOWER over the past few weeks. That's no coincidence. It’s because we’ve been ramping up to share some very exciting news: we are pleased to announce the first ever [OpenPOWER Developer Challenge](http://openpower.devpost.com/)! + +[![Tap into performance tile](images/Tap-into-performance-tile-1024x577.jpg)](http://openpower.devpost.com) + +Show us what you can do with OpenPOWER technology and you could win a whole range of prizes, from Apple Watches to an all-expenses paid trip to Supercomputing 2016 to showcase your work in front of developers and IT leaders from around the world! Just go to [http://openpower.devpost.com](http://openpower.devpost.com/) to register. + +The [OpenPOWER Developer Challenge](http://openpower.devpost.com/) allows you to participate in two ways: + +- Port and optimize your code in the Open Road Test, and use accelerators to go even faster +- Join the Spark Rally to train an accelerated deep neural network to recognize objects with greater activity, then show us how you can scale with Apache Spark + +There is no limit to the number of entries you submit, so long as they are their own unique applications! + +The submission period will open on May 1, and closes on August 2, so start forming teams and thinking of project ideas now! + +To get started, let's take a tour of the Supervessel virtual environment that you will be using to build your application. + +https://www.youtube.com/watch?v=C08bfOHt3kw diff --git a/content/blog/openpower-developer-resources.md b/content/blog/openpower-developer-resources.md new file mode 100644 index 0000000..7624afe --- /dev/null +++ b/content/blog/openpower-developer-resources.md @@ -0,0 +1,32 @@ +--- +title: "Go Global with OpenPOWER Developer Resources" +date: "2016-03-21" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Sam Ponedal, Social Strategist for OpenPOWER, IBM_ + +[![OpenPOWER Developer Map Social Tile](images/OpenPOWER-Developer-Map-Social-Tile.jpg)](http://developers.openpowerfoundation.org) + +There are many faces that make up the OpenPOWER Foundation and its ecosystem. We have [hardware manufacturers who provide the cutting edge technology](https://openpowerfoundation.org/blogs/capi-drives-business-performance/) that serves as the hardware platform for OpenPOWER. We have [MSPs that install and leverage OpenPOWER](https://openpowerfoundation.org/blogs/openpower-open-compute-rackspace-barreleye/) technology for their customers, and we have [researchers and universities who are applying OpenPOWER technology to solve global problems](https://openpowerfoundation.org/videos/video-ibm-and-openpower-partner-with-oak-ridge-national-labs-to-solve-worlds-toughest-challenges/). But perhaps the most important individuals working in the OpenPOWER ecosystem are developers. Currently, there are over 1,000 ISVs who have built applications for OpenPOWER, and we know that in order for our ecosystem to grow, we need to keep making OpenPOWER the most accessible open platform for developers, with the performance capabilities to make the most blazing fast applications to boot. 
With features like CAPI and GPU acceleration, OpenPOWER provides developers with the performance to truly make their applications sing. In addition, OpenPOWER supports the same familiar tools, like Linux and CUDA, along with both big and little endian modes, so that developers can apply the skills they already have to building new applications.

But how do developers access OpenPOWER hardware for testing and development? To answer that important question, today we are pleased to announce the [OpenPOWER Developer Resources Map](http://bit.ly/25e6Zag), available at [http://developers.openpowerfoundation.org](http://bit.ly/25e6Zag). This interactive and free tool can help developers locate in-person and virtual development resources that best suit their unique needs and goals for their project. Interested in getting hands-on in-person with the POWER8 chip architecture? Visit one of our members' open developer facilities in your local area. Want to go virtual and explore how CAPI can accelerate your application on the OpenPOWER platform? Leverage [Supervessel](https://ptopenlab.com/cloudlabconsole/#/) or our other CAPI-enabled developer clouds. Developers simply log in, select the filters that best suit their project, and then pinpoint the best resource for them.

[![openpower developer map](images/openpower-developer-map.png)](http://developers.openpowerfoundation.org)

This new tool complements our existing library of developer tools and information, which features:

- Development Environments and VMs
- Development Systems
- Technical Specifications
- Software
- Developer Tools

To learn more about the OpenPOWER tools available to developers, visit our [Technical Resources page](https://openpowerfoundation.org/technical/technical-resources/).

Developers are the key to the growth of the OpenPOWER ecosystem, and with the greatest minds in the world building cutting-edge, high-performance applications on OpenPOWER, the world's only truly open hardware architecture, we're excited about the possibilities. If you're a developer looking for ways to get more involved with OpenPOWER, stay tuned, as we're going to be announcing some exciting developer-focused initiatives in the coming months.

Have questions or want to know more about what we offer? Let us know in the comments below! Happy coding! diff --git a/content/blog/openpower-ecosystem-propels-open-innovation-in-data-center.md new file mode 100644 index 0000000..9b4c01b --- /dev/null +++ b/content/blog/openpower-ecosystem-propels-open-innovation-in-data-center.md @@ -0,0 +1,43 @@ +--- +title: "OpenPOWER Ecosystem Propels Open Innovation in Hyperscale Data Centers" +date: "2016-04-06" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +#### Google and Rackspace Develop OpenPOWER System for the Open Compute Project; IBM Announces Intent to Expand Line of POWER-based Scale-out Linux Servers

OPENPOWER SUMMIT, San Jose, Calif. – April 6, 2016: The [OpenPOWER Foundation](https://openpowerfoundation.org/), a consortium of more than 200 leading technology companies, organizations and individuals innovating around the POWER processor, today announced more than 50 new open innovations to help companies better solve grand challenges around big data. 
+ +Many new community innovations [unveiled](https://openpowerfoundation.org/press-releases/openpower-foundation-reveals-new-servers-and-big-data-analytics-innovations/) today are designed to be incorporated into the [Open Compute Project](http://www.opencompute.org/) product portfolio. + +Among these, Google, a founding member of the OpenPOWER Foundation, [announced](https://cloudplatform.googleblog.com/2016/04/Google-and-Rackspace-co-develop-open-server-architecture-based-on-new-IBM-POWER9-hardware.html) today that it is developing a next-generation OpenPOWER and Open Compute Project form factor server. Google is working with Rackspace to co-develop an open server specification based on the new POWER9 architecture, and the two companies will submit a candidate server design to the Open Compute Project. + +Additionally, Rackspace has announced that “Barreleye” has moved from the lab to the data center.  Rackspace anticipates “Barreleye” will move into broader availability throughout the rest of the year, with the first applications on the Rackspace Public Cloud powered by OpenStack.  Rackspace and IBM collectively contributed the “Barreleye” specifications to the Open Compute Project in January 2016. The specifications were formally accepted by the Open Compute Project in February 2016. + +“Today’s IT infrastructure leaders seek open technology alternatives to processor and system architectures,” said John Zannos, Chairman of the OpenPOWER Foundation, and Vice President of Cloud Channels and Alliances, Canonical. “Customized solutions and open building blocks are quickly becoming required options for system design. Collaborative innovation, the hallmark of both the OpenPOWER Foundation and the Open Compute Project, is essential to building the next generation data center.” + +“We’re thrilled to take the next step in our work with the OpenPOWER and Open Compute Project communities,” said Maire Mahony, Hardware Engineering Manager, Google, and OpenPOWER Foundation Board Member. “We are committed to open innovation, and to optimizing performance and cost in data centers. Working with Rackspace, we will submit a POWER9 server design to the Open Compute Project that will address the diverse requirements of end customers for data center services.” + +“We are excited to work with Google on our POWER9 OpenPOWER-based, Open Compute Project form factor server,” said Aaron Sullivan, Open Compute Project Incubation Committee Member and Distinguished Engineer at Rackspace. “OpenPOWER processors combined with acceleration technology are fundamentally changing server and data center design today and into the future. OpenPOWER provides a great platform for the speed and flexibility needs of hyperscale operators as they demand ever-increasing levels of scalability.” + +“Our ongoing work with the OpenPOWER Foundation is a natural extension of our commitment to open collaboration and innovation in data center technology,” said Amber Graner, Director of Operations, Community Manager, the Open Compute Project. “The Open Compute Project is focused on efficiency, flexibility, and openness—and we recognize the importance of the POWER processor and the robust OpenPOWER ecosystem for the future of server design.” + +The Open Compute Project is a member of the OpenPOWER Foundation [Advisory Group](https://openpowerfoundation.org/about-us/advisory-group/). 
+ +**IBM Expands its Linux-only Portfolio Leveraging OpenPOWER Innovation** + +[IBM](http://www.ibm.com/it-infrastructure/us-en/index-e.html) announced that it plans to add systems to its [LC line of servers](http://www-03.ibm.com/systems/power/hardware/linux-lc.html). The LC line, launched in October of 2015, infuses OpenPOWER technology into IBM’s scale-out server lineup. As a result of dozens of proof of concepts in areas like hyperscale data centers, high performance computing and large enterprises, IBM intends to make the following additions to the LC line, aimed at helping clients on the path to becoming cognitive businesses and furthering IBM’s commitment to open and collaborative innovation: + +- IBM intends to add Open Compute Project-compliant systems to its Power Systems LC portfolio to support big data analytics and cognitive applications in the cloud. This is in addition to [three other](https://openpowerfoundation.org/blogs/open-compute-summit-barreleye/) OpenPOWER Foundation members that recently announced plans for Open Compute Project-compliant, OpenPOWER systems:  Mark III Systems, Penguin Computing and Stack Velocity. +- [SUPERMICRO](http://www.supermicro.com/index_home.cfm) is currently developing two new POWER-based servers for IBM. The systems are based on the company’s “Ultra” architecture and IBM intends to add them to the LC server line to add further design options. The two systems – a storage rich 2 socket, 2U design and a dense 2 socket, 1U design – will be POWER-based, GPU and CAPI acceleration enabled and fine-tuned for cloud and cognitive workloads. +- IBM, in collaboration with [NVIDIA](http://www.nvidia.com/content/global/global.php) and [Wistron](http://www.wistron.com/), plans to release its second-generation OpenPOWER high performance computing server, which includes support for the NVIDIA® Tesla® Accelerated Computing platform ([learn more](http://www.ibm.com/blogs/systems/ibm-power8-cpu-and-nvidia-pascal-gpu-speed-ahead-with-nvlink)). The server will leverage POWER8 processors connected directly to the new NVIDIA Tesla P100 GPU accelerators via embedded NVIDIA NVLink™ high-speed interconnect technology. Early systems will be available in Q4 2016. Additionally IBM and NVIDIA plan to create global acceleration labs to help developers and ISVs port applications on the POWER8 and NVIDIA NVLink™ based platform. + +**About the OpenPOWER Foundation** The OpenPOWER Foundation is a global, open development membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. OpenPOWER members share expertise, investment and server-class intellectual property to develop solutions that serve the evolving needs of technology customers. + +The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation. + +For further details visit [www.openpowerfoundation.org](https://openpowerfoundation.org/). 
diff --git a/content/blog/openpower-ecosystem-spurs-innovation.md new file mode 100644 index 0000000..3f39382 --- /dev/null +++ b/content/blog/openpower-ecosystem-spurs-innovation.md @@ -0,0 +1,43 @@ +--- +title: "OpenPOWER Ecosystem Spurs Innovation in AI and Hyperscale Datacenters" +date: "2018-03-18" +categories: + - "blogs" +--- +

Industry leaders like Google, Uber, Hitachi, Inspur, and Atos came together with representatives from the 325 OpenPOWER Foundation members today at the OpenPOWER Summit 2018 in Las Vegas, NV, to discuss how OpenPOWER is helping to transform their businesses and fuel data-intensive workloads and AI innovations. Joining them were some of the world’s leading software, hardware and cloud vendors, who discussed over 100 new OpenPOWER-based products that they are bringing to market through collaborative innovation to deliver differentiated benefits.

## Putting POWER9 to Work

In December 2017 IBM revealed the all-new IBM POWER9 CPU, built from the ground up for enterprise AI and other data-intensive workloads. Thanks to the Foundation’s model of open collaboration, members are already revealing their own POWER9-based products. In a panel discussion of some of the world’s leading hardware vendors, panelists from Wistron, Hitachi, Inspur, Rackspace, and Atos all detailed how their customers and the industry are looking for x86 alternatives, and with industry-exclusive technology like next-generation NVIDIA NVLink, OpenCAPI and PCIe Gen4, POWER9 is a great fit.

“Through its Escala server product line Atos, a global leader in digital transformation, has been actively contributing to Power technology for over 25 years”, said Rene Verkerk, Business Unit Director, Escala, Atos. “Through OpenPOWER, we are bringing leading-edge developments such as Machine Learning, inference on FPGAs and Open Source Databases into Enterprise class infrastructures and solutions with the performance of POWER9.”

OpenPOWER innovators revealed over 100 new OpenPOWER-based products that take advantage of the latest Power innovations. As the only processor with OpenCAPI and PCIe Gen4, POWER9 gives OpenPOWER members nearly 10x the I/O bandwidth of x86 with shared memory coherence, opening up all new means to deliver value to their customers.

“Mellanox enables the highest interconnect bandwidth, lowest latency, and best efficiency for high performance, data intensive and artificial intelligence applications,” said Scot Schultz, senior director, HPC / Artificial Intelligence and Technical Computing at Mellanox Technologies. “The combination of Mellanox solutions and the IBM POWER9 processor provides our customers with leading compute and storage infrastructure.”

“Our innovation with OpenPOWER Foundation members like IBM has created breakthrough technologies for accelerating HPC and AI,” said Ian Buck, vice president and general manager of the accelerated computing group at NVIDIA. “IBM and NVIDIA enabled the next generation of GPU-accelerated servers with NVIDIA NVLink to connect POWER9 CPUs and Volta GPUs, and power the leadership class supercomputers at Oak Ridge and Lawrence Livermore National Labs.”

Among the new solutions announced were:

- New POWER servers from Hitachi, Atos, Wistron, Inspur, Supermicro, Inventec, Rackspace, Gigabyte, Raptor, and more. 
+
- New OpenCAPI devices from Nallatech, Mellanox, Alpha-Data, Xilinx, Amphenol, Cavium, Rambus that take advantage of coherence and up to 9.5x more memory bandwidth than x86.
- New PCIe Gen4 devices from Broadcom, NEC, and Eideticom that accelerate storage, networking, and compute functions on OpenPOWER platforms.
- New OpenPOWER-compatible software offerings from ISVs H2O, brytlyt, MapD, Elinar Oy, and more that drive AI and modern data workloads.
- [See the complete list of new products here.](https://openpowerfoundation.org/wp-content/uploads/2018/03/Hardware-Reveal-Flyerv2018-v1.pdf)

## Forecasting OpenPOWER Clouds with a Mix of AI

As more and more OpenPOWER-driven offerings come to market, the availability and versatility of tools for datacenters continues to expand, and forward-thinking organizations are already seeing the benefits.

- Google announced that their IBM POWER9-based server, Zaius, is deployed and in the process of scaling up in their data centers. Google's Maire Mahony declared Zaius "Google Strong" and they are actively adding new production workloads onto Zaius and POWER9.
- Uber revealed that they intend to push the boundaries of distributed deep learning and make Horovod, one of their AI projects, successfully scale on extremely large clusters and supercomputers by using the Summit supercomputer at Oak Ridge National Labs. Horovod is one of Uber’s many AI initiatives, and machine learning helps the company in everything from identifying fraudulent accounts to better driver routing and more accurate pricing.
- PayPal used IBM’s OpenPOWER Systems and PowerAI to accelerate deep learning research for fraud prevention, unlocking the computational power of the Power architecture on extra-large datasets.
- Tencent, a hyperscale datacenter provider, recently purchased a number of OpenPOWER-based systems to add to its growing enterprise data center. With the adoption of OpenPOWER technology, Tencent’s overall efficiency has improved by more than 30%, with savings of 30% on rack resources and 30% on server resources.
- Ali Cloud, the cloud arm of online retailer Alibaba, said they have included OpenPOWER-based servers on their Ali X-Dragon Cloud platform and have invited customers to this pilot platform. Deployed in less than a month, the ease of use and compatibility of the servers left Ali Cloud impressed.
- LimeLight, which gives clients tools to help stream digital content like music and video, has embraced OpenPOWER to get around the PCIe Gen3 bottleneck on x86. By using PCIe Gen4 on POWER9, their clients can deliver content to their customers faster and with less buffering.

The announcements made today at the OpenPOWER Summit 2018 are inspiring and exciting. With these partners involved and new products available, the OpenPOWER ecosystem is poised to reach new heights in 2018 and beyond. 
+

[Click here to view a recording of keynote sessions at OpenPOWER Summit 2018.](https://www.youtube.com/watch?v=9tmWN9PR-ZU) diff --git a/content/blog/openpower-firmware-technical-training.md new file mode 100644 index 0000000..8466dde --- /dev/null +++ b/content/blog/openpower-firmware-technical-training.md @@ -0,0 +1,8 @@ +--- +title: "OpenPOWER Firmware Technical Training" +date: "2015-02-16" +categories: + - "blogs" +--- + +![2015OpenPOWER FirmwareTechnicalTraining](images/2015OpenPOWER-FirmwareTechnicalTraining.png) diff --git a/content/blog/openpower-foundation-announces-2016-openpower-summit-revolutionizing-the-datacenter.md new file mode 100644 index 0000000..913e2af --- /dev/null +++ b/content/blog/openpower-foundation-announces-2016-openpower-summit-revolutionizing-the-datacenter.md @@ -0,0 +1,33 @@ +--- +title: "OpenPOWER Foundation Announces 2016 OpenPOWER Summit, \"Revolutionizing the Datacenter\"" +date: "2015-12-01" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- +

**Registration Goes Live, Opens Call for Speakers and Exhibits**

**SAN JOSE, Calif., December 1, 2015** — The [OpenPOWER Foundation](http://www.openpowerfoundation.org/), an open development community dedicated to accelerating datacenter innovation by taking advantage of open interfaces, reference designs, and collaboration opportunities on the POWER processor architecture, today announced its [2016 OpenPOWER Summit: Revolutionizing the Datacenter](https://openpowerfoundation.org/openpower-summit-2016/). The industry conference, open to the public at large, will be held April 5-7 at the San Jose Convention Center. Since the inaugural [OpenPOWER Summit in March 2015](https://openpowerfoundation.org/2015-summit/), the OpenPOWER Foundation membership roster has grown from 110 members to more than 160 members worldwide, collaborating on more than 100 development projects and 1,900 applications to drive innovation at all levels of the hardware and software stack.

"We were pleased with the cross-section of attendees at our inaugural summit -- from hardware and software developers to researchers, from industry luminaries to tech influencers. We expect even greater participation this year with the unique opportunity to present your latest projects, and to collaborate, learn and contribute to creating the best, highest-value server solutions that meet a variety of datacenter needs,” said Gordon MacKean, Chairman, OpenPOWER Foundation.

The three-day event will include presentations and an exhibitor pavilion where members will have the opportunity to unveil their OpenPOWER-based solutions and demonstrate performance breakthroughs achieved. The event will feature a keynote address by OpenPOWER’s leadership as well as a series of presentations by technical working groups, OpenPOWER members and end users.

“Over the last year, we have seen enormous growth and advancement from our members as they have moved in their journey from rethinking the datacenter to revolutionizing it. Our members continue to introduce new innovations and expand the OpenPOWER ecosystem to meet a growing demand for more innovation from clients around the globe,” said Brad McCredie, President, OpenPOWER Foundation. 
“At this upcoming OpenPOWER Summit, we will showcase the next generation of cutting-edge advancements and take a look at what our members are bringing to market and are deploying in 2016. If you want to know what is going on with OpenPOWER or to contribute to the conversation, this will be the event not to be missed.” + +Calls for speakers and exhibits are open at: [https://openpowerfoundation.org/openpower-summit-2016/](https://openpowerfoundation.org/openpower-summit-2016/) + +Registration to attend the event is at: [http://www.gputechconf.com/attend/registration](http://www.gputechconf.com/attend/registration). To get the latest updates about the Summit and other OpenPOWER Foundation news, follow the Foundation on [LinkedIn](https://www.linkedin.com/groups/OpenPOWER-Foundation-7460635), [Facebook](https://www.facebook.com/openpower) or [Twitter](https://twitter.com/openpowerorg) with the #OpenPOWERSummit hashtag. + +About OpenPOWER Foundation The OpenPOWER Foundation is a global, open development membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. OpenPOWER members share expertise, investment and server-class intellectual property to develop solutions that serve the evolving needs of technology customers. + +The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation. + +For further details about the OpenPOWER Foundation visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org/). + +\# # # + +Media Contact: Abby Schoffman Text100 tel: 212.871.3928 email: [abby.schoffman@text100.com](mailto:abby.schoffman@text100.com) diff --git a/content/blog/openpower-foundation-announces-2018-europe-summit-open-the-future-2.md b/content/blog/openpower-foundation-announces-2018-europe-summit-open-the-future-2.md new file mode 100644 index 0000000..2022661 --- /dev/null +++ b/content/blog/openpower-foundation-announces-2018-europe-summit-open-the-future-2.md @@ -0,0 +1,38 @@ +--- +title: "OpenPOWER Foundation Announces 2018 Europe Summit - \"Open the Future\"" +date: "2018-08-29" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +# Summit to take place Oct. 3-4, 2018 in Amsterdam + +  + +_August 29, 2018 06:30 ET_ | **Source:** OpenPOWER Foundation + +PISCATAWAY, N.J., Aug. 29, 2018 (GLOBE NEWSWIRE) -- The [OpenPOWER Foundation](https://www.globenewswire.com/Tracker?data=V_G_mk83VcomQunOy_W9mTIhDJblSDTX2rcAvZgArk4SRLLQIDGHkb4b5f0-Hm3egdSYZEAnJQK_v4JwVe4vdf8pmtaMpCuxQAbao1IgWgQ= "OpenPOWER Foundation"), an open development community dedicated to innovation for POWER platforms, announces its second [OpenPOWER Europe Summit](https://www.globenewswire.com/Tracker?data=V_G_mk83VcomQunOy_W9mepq_nlTljxgRw2NAuQfuuoJcttMQrOz-PrDtYjxjADwF6rkzbkmBAB5FvutEWO5ueJEcW6Ezm35QS6I-ib-wio= "OpenPOWER Europe Summit"). The Summit will be held Oct. 3-4, 2018, at the RAI Center in Amsterdam immediately following the Open Compute Regional Summit 2018 at the same location. + +The two-day, developer-centric event themed “Open the Future” will include: + +- Keynotes from member organizations. 
+- A deep technical workshop on building high-bandwidth CAPI / OpenCAPI FPGA applications. +- A Plugfest featuring OpenPOWER hardware from numerous members. +- A Hackathon focused on OpenBMC and coordinated with Open Compute. +- Over 25 technical sessions covering everything from AI/Machine Learning to Open Source Documentation, Accelerators to System Firmware and more presented by experts from the EU and beyond. +- An OpenPOWER exhibitor pavilion where members will demonstrate their latest advancements in OpenPOWER applications, platforms and research while networking with industry peers. + +“European interest in OpenPOWER continues to grow steadily, approaching 25 percent of total membership, and we’re thrilled to head to Amsterdam for our second Summit where much of the focus will be on hands-on development opportunities for both software and hardware ecosystems and innovators,” said Bryan Talik, President, OpenPOWER Foundation. “With remarkable innovation from universities, research centers, hardware and software developers and government agencies, Europe is paving the way to advance computing infrastructure, artificial intelligence, security and analytics – all while prioritizing transparency to foster a dynamic open ecosystem for development – in-synch with our focus on creating developer-centric opportunities.” + +To register for the OpenPOWER Europe Summit or find more information, please visit [https://openpowerfoundation.org/summit-2018-10-eu/](https://www.globenewswire.com/Tracker?data=xdpOTr1R-TPYZCz2y_UtmtLvGxuIwglZ0Jt2C7V8U_n4wK-oCUKVzBtlsWVMHowfrpPMvnUll1cpe_UqUsvE0t6ihlJG4A2WtAyZk22469FbboTawVCOAdQ6XjKiV-EMGCCI9-HgNyRYRPmW2gdLTj31dzIPocufADpYgTu0LK0=) or follow the Foundation on [LinkedIn](https://www.globenewswire.com/Tracker?data=GYQ5eCRyBqhdv1rJh6giQDj9gzlke4fpB_hz5ow-RNxrx-TPMFOErVem0QI1YtixGLhyIwjiav2UJFmfKFhk_JXifJXjZINnOREFwjXYszXbGg-KO3JCvA8qPZRhbE8h "LinkedIn"), [Facebook](https://www.globenewswire.com/Tracker?data=dduqh_68gPXp_kWDtSz88zF97ub_RBuE3dK1RTe706mlwRUDx1tJ-TR--AqLtxRqOW2dAqc6rEEqyQCy7_7k8w== "Facebook") or [Twitter](https://www.globenewswire.com/Tracker?data=tBqM2sFJhW1PNycldTiJd9pFV10U4j3pQUZvw8JHOpdXlcHcdQwb3V9H_jvAkGnzccYyvL4OL2hWvSdWuWG5NA== "Twitter") with the #[OpenPOWERSummit](https://www.globenewswire.com/Tracker?data=V_G_mk83VcomQunOy_W9mRbCHZjZaRnKz1tHk-WSnFHKGuS1fLHiAmGgFxGd56YifeqRv6WPz5Un-nLIFbzd5GKF49qcZvpBOmDFHwQ7I3B7XQBhnXpRZXQEXIEi8E5m "OpenPOWERSummit") hashtag. + +**About OpenPOWER Foundation** + +The OpenPOWER Foundation is an open technical community based on the POWER architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the Foundation is to create an open ecosystem built around the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers and industry. + +Founded in 2013 by Google, IBM, NVIDIA, Tyan and Mellanox, the organization has grown to 350+ members worldwide from all sectors of the High Performance Computing ecosystem. For more information on OpenPOWER Foundation, visit [www.openpowerfoundation.org](https://www.globenewswire.com/Tracker?data=R8c1OLtGuma0deIbR2TvrUm5LmuktfG2fFFxnv3qyl7P1u6a9rgZvmbxvVq2EYimLxe-K92qTuHOEh4i_lgiJc3awZ0oZfSTKZfL9D5LFZqp60lkX84LVotnZ_uUo353 "www.openpowerfoundation.org"). 
+ +**Media Contact:** Joni Sterlacci OpenPOWER Foundation [j.sterlacci@ieee.org](https://www.globenewswire.com/Tracker?data=EqP9-sxuJKYk6C_Pq_vnvcZbKG1rkqVicPXbU_JcJjlUp0WFCocYFsEGW3mrX6tFDu2tCJRSjSpv38_NirgeAuJH51G9wW9Vja9VJHRSQu8= "j.sterlacci@ieee.org") 732-562-5464 diff --git a/content/blog/openpower-foundation-announces-developer-congress-focused-ai-driven-new-machine-learning-working-group.md b/content/blog/openpower-foundation-announces-developer-congress-focused-ai-driven-new-machine-learning-working-group.md new file mode 100644 index 0000000..4f312e2 --- /dev/null +++ b/content/blog/openpower-foundation-announces-developer-congress-focused-ai-driven-new-machine-learning-working-group.md @@ -0,0 +1,52 @@ +--- +title: "OpenPOWER Foundation Announces Developer Congress focused on AI Driven by New Machine Learning Working Group" +date: "2017-05-04" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +**Industry leaders like Red Hat continue to join OpenPOWER pushing membership to over 300 companies** + +SAN FRANCISCO, CA--(Marketwired - Apr 17, 2017) - On the wave of strong momentum around machine learning and AI in 2017, the OpenPOWER Foundation will put these innovative technologies center stage at the upcoming [OpenPOWER Foundation Developer Congress](http://ctt.marketwire.com/?release=1305021&id=11509228&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2fopenpower-developer-congress%2f), May 22-25, at the Palace Hotel in San Francisco. The conference will focus on continuing to [foster the collaboration](http://ctt.marketwire.com/?release=1305021&id=11509231&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2fblogs%2fopenpower-open-compute-data-center%2f) within the foundation to satisfy the performance demands of today's computing market. + +Developers will have the opportunity to learn and gain first-hand insights from the creators of some of the most advanced technology currently driving Deep Learning, AI and Machine Learning. Key themes will include: + +- Deep Learning, Machine Learning and Artificial Intelligence through GPU Acceleration and OpenACC. Learn the latest techniques on how to design, train and deploy neural network-powered machine learning in your applications. +- Deploy a fully optimized and supported platform for machine learning with IBM's PowerAI that supports the most popular machine learning frameworks -- Anaconda, Caffe, Chainer, TensorFlow and Theano. +- Custom Acceleration for AI through FPGAs +- Databases & Data Analytics +- Porting, Optimization, Developer Tools and Techniques +- Firmware & OpenBMC + +The Developer Congress is supported by the newly formed OpenPOWER Machine Learning Work Group (OPMLWG), an addition to the OpenPOWER Foundation community. The new group -- which includes Canonical, Cineca, Google and Mellanox, among others -- provides a forum for collaboration that will help define frameworks for the productive development and deployment of machine learning solutions using the IBM POWER architecture and OpenPOWER ecosystem technology. + +As part of the ecosystem, the OPMLWG plays a crucial role in expanding the OpenPOWER mission. It focuses on addressing the challenges machine learning project developers are continuously facing by identifying use cases, defining requirements and extracting workflows, to better understand processes with similar needs and pain points. 
The working group will also identify and develop technologies for the effective execution on machine learning applications by enabling hardware (HW), software (SW) and acceleration across the OpenPOWER ecosystem. + +The OPMLWG group and Developer Congress come soon after the OpenPOWER Foundation surpassed a 300-member milestone, with large players joining the fold that have developed new processes and technologies based on the OpenPOWER architecture. Some recent additions include: + +- Red Hat, which joined as a Platinum member and part of the board, adding open source leadership and expertise around community driven software innovation +- Kinetica, offers a high-performance analytics database that harnesses the power of GPUs for unprecedented performance to ingest, explore and visualize data in motion and at rest +- Bitfusion, leaders in end to end application lifecycle management and developer automation for deep learning, AI and GPUs. +- OmniSci, which offers a fast database and visual analytics platform that leverages the parallel processing power of GPUs + +"Open standards are a critical component of modern enterprise IT, and for OpenPOWER having a common set of guidelines for integration, implementation and enhanced IT security are key," said Scott Herold, senior manager, Multi-Architecture product strategy Red Hat. "Red Hat is a strong proponent of open standards across the technology stack and we are pleased to work with the OpenPOWER Foundation's various work groups in driving these standards to further enterprise choice as it relates to computing architecture." + +All OpenPOWER Members can join and work on: + +- Collection and description of use cases +- Porting, tuning and optimization of important Open Source Library / Frameworks +- Creating a ML/DL Sandbox for quick start, including example use cases, data sets and tools +- Recommending platform features for machine learning + +"OpenPOWER was founded with the goal of granting the marketplace more technology choice and the ability to rethink the approach to data centers. Today, we see the growing application of machine-learning and cognitive technology, the OpenPOWER foundation is actively supporting technical initiatives and solution development in these areas to help drive innovation and industry growth," said John Zannos, Chairman of The OpenPOWER Foundation. "The Machine Learning Work Group will focus on addressing this need for innovation, allowing technology developers and users to collaborate as they search for the solutions to the computational challenges being posed by machine learning and artificial intelligence." + +**About The OpenPOWER Foundation** [OpenPOWER Foundation](http://ctt.marketwire.com/?release=1305021&id=11509234&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2f) was founded in 2013 as an open technical membership organization enabling data centers to rethink their approach to technology. Member companies are empowered to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. At the heart of the efforts, are member offerings and solutions that can further OpenPOWER adoption, developer community engagement and a continuous effort to foster innovation in and outside the data center. + +OpenPOWER members are actively pursuing innovation and all are welcome to join in moving the state of the art of OpenPOWER systems design forward. 
Learn more through the [OpenPOWER Intro Video](http://ctt.marketwire.com/?release=1305021&id=11509237&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2fvideos%2fvideo-openpower%2f) and read more about OpenPOWER Ready products [here](http://ctt.marketwire.com/?release=1305021&id=11509240&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2ftechnical%2fopenpower-ready%2f). + +## CONTACT INFORMATION + +- **Media Contact:** Mark Wheeler Highwire PR [mark@highwirepr.com](mailto:mark@highwirepr.com) diff --git a/content/blog/openpower-foundation-announces-developer-congress-focused-on-ai-driven-by-new-machine-learning-working-group.md b/content/blog/openpower-foundation-announces-developer-congress-focused-on-ai-driven-by-new-machine-learning-working-group.md new file mode 100644 index 0000000..bf2de5b --- /dev/null +++ b/content/blog/openpower-foundation-announces-developer-congress-focused-on-ai-driven-by-new-machine-learning-working-group.md @@ -0,0 +1,58 @@ +--- +title: "OpenPOWER Foundation Announces Developer Congress focused on AI Driven by New Machine Learning Working Group" +date: "2017-04-17" +categories: + - "press-releases" + - "blogs" +--- + +\[vc\_row css\_animation="" row\_type="row" use\_row\_as\_full\_screen\_section="no" type="full\_width" angled\_section="no" text\_align="left" background\_image\_as\_pattern="without\_pattern"\]\[vc\_column\]\[vc\_row\_inner row\_type="row" type="full\_width" text\_align="left" css\_animation=""\]\[vc\_column\_inner\]\[vc\_column\_text css=".vc\_custom\_1538077716912{margin-bottom: 20px !important;}"\] + +## Industry leaders like Red Hat continue to join OpenPOWER pushing membership to over 300 companies + +\[/vc\_column\_text\]\[/vc\_column\_inner\]\[/vc\_row\_inner\]\[vc\_column\_text\] + +SAN FRANCISCO, CA–(Marketwired – Apr 17, 2017) – On the wave of strong momentum around machine learning and AI in 2017, the OpenPOWER Foundation will put these innovative technologies center stage at the upcoming [OpenPOWER Foundation Developer Congress](http://ctt.marketwire.com/?release=1305021&id=11509228&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2fopenpower-developer-congress%2f), May 22-25, at the Palace Hotel in San Francisco. The conference will focus on continuing to [foster the collaboration](http://ctt.marketwire.com/?release=1305021&id=11509231&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2fblogs%2fopenpower-open-compute-data-center%2f) within the foundation to satisfy the performance demands of today’s computing market. + +Developers will have the opportunity to learn and gain first-hand insights from the creators of some of the most advanced technology currently driving Deep Learning, AI and Machine Learning. Key themes will include: + +- Deep Learning, Machine Learning and Artificial Intelligence through GPU Acceleration and OpenACC. Learn the latest techniques on how to design, train and deploy neural network-powered machine learning in your applications. +- Deploy a fully optimized and supported platform for machine learning with IBM’s PowerAI that supports the most popular machine learning frameworks — Anaconda, Caffe, Chainer, TensorFlow and Theano. +- Custom Acceleration for AI through FPGAs +- Databases & Data Analytics +- Porting, Optimization, Developer Tools and Techniques +- Firmware & OpenBMC + +The Developer Congress is supported by the newly formed OpenPOWER Machine Learning Work Group (OPMLWG), an addition to the OpenPOWER Foundation community. 
The new group — which includes Canonical, Cineca, Google and Mellanox, among others — provides a forum for collaboration that will help define frameworks for the productive development and deployment of machine learning solutions using the IBM POWER architecture and OpenPOWER ecosystem technology. + +As part of the ecosystem, the OPMLWG plays a crucial role in expanding the OpenPOWER mission. It focuses on addressing the challenges machine learning project developers are continuously facing by identifying use cases, defining requirements and extracting workflows, to better understand processes with similar needs and pain points. The working group will also identify and develop technologies for the effective execution of machine learning applications by enabling hardware (HW), software (SW) and acceleration across the OpenPOWER ecosystem. + +The OPMLWG group and Developer Congress come soon after the OpenPOWER Foundation surpassed a 300-member milestone, with large players joining the fold that have developed new processes and technologies based on the OpenPOWER architecture. Some recent additions include: + +- [Red Hat](https://www.redhat.com/en), which joined as a Platinum member and part of the board, adding open source leadership and expertise around community-driven software innovation +- [Kinetica](https://www.kinetica.com/), which offers a high-performance analytics database that harnesses the power of GPUs for unprecedented performance to ingest, explore and visualize data in motion and at rest +- [Bitfusion](https://bitfusion.io/), a leader in end-to-end application lifecycle management and developer automation for deep learning, AI and GPUs +- [OmniSci](http://www.omnisci.com), which offers a fast database and visual analytics platform that leverages the parallel processing power of GPUs + +“Open standards are a critical component of modern enterprise IT, and for OpenPOWER having a common set of guidelines for integration, implementation and enhanced IT security are key,” said Scott Herold, senior manager, Multi-Architecture product strategy, Red Hat. “Red Hat is a strong proponent of open standards across the technology stack and we are pleased to work with the OpenPOWER Foundation’s various work groups in driving these standards to further enterprise choice as it relates to computing architecture.” + +All OpenPOWER Members can join and work on: + +- Collection and description of use cases +- Porting, tuning and optimization of important open source libraries and frameworks +- Creating an ML/DL Sandbox for quick start, including example use cases, data sets and tools +- Recommending platform features for machine learning + +“OpenPOWER was founded with the goal of granting the marketplace more technology choice and the ability to rethink the approach to data centers. Today, as we see the growing application of machine learning and cognitive technology, the OpenPOWER Foundation is actively supporting technical initiatives and solution development in these areas to help drive innovation and industry growth,” said John Zannos, Chairman of The OpenPOWER Foundation. 
“The Machine Learning Work Group will focus on addressing this need for innovation, allowing technology developers and users to collaborate as they search for the solutions to the computational challenges being posed by machine learning and artificial intelligence.” + +**About The OpenPOWER Foundation** [OpenPOWER Foundation](http://ctt.marketwire.com/?release=1305021&id=11509234&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2f) was founded in 2013 as an open technical membership organization enabling data centers to rethink their approach to technology. Member companies are empowered to customize POWER CPU processors and system platforms for optimization and innovation for their business needs. At the heart of the efforts, are member offerings and solutions that can further OpenPOWER adoption, developer community engagement and a continuous effort to foster innovation in and outside the data center. + +OpenPOWER members are actively pursuing innovation and all are welcome to join in moving the state of the art of OpenPOWER systems design forward. Learn more through the [OpenPOWER Intro Video](http://ctt.marketwire.com/?release=1305021&id=11509237&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2fvideos%2fvideo-openpower%2f) and read more about OpenPOWER Ready products [here](http://ctt.marketwire.com/?release=1305021&id=11509240&type=1&url=https%3a%2f%2fopenpowerfoundation.org%2ftechnical%2fopenpower-ready%2f). + +\[/vc\_column\_text\]\[vc\_column\_text css=".vc\_custom\_1538077745091{margin-top: 20px !important;}"\] + +## CONTACT INFORMATION + +- **Media Contact:** Mark Wheeler Highwire PR [mark@highwirepr.com](mailto:mark@highwirepr.com) + +\[/vc\_column\_text\]\[vc\_empty\_space\]\[/vc\_column\]\[/vc\_row\] diff --git a/content/blog/openpower-foundation-announces-first-openpower-summit.md b/content/blog/openpower-foundation-announces-first-openpower-summit.md new file mode 100644 index 0000000..a953146 --- /dev/null +++ b/content/blog/openpower-foundation-announces-first-openpower-summit.md @@ -0,0 +1,36 @@ +--- +title: "OpenPOWER Foundation Announces First OpenPOWER Summit" +date: "2014-12-08" +categories: + - "press-releases" + - "blogs" +tags: + - "openpower" +--- + +SAN JOSE, Calif., Dec. 8, 2014 /PRNewswire/ -- The [OpenPOWER Foundation](https://openpowerfoundation.org/), an open development community dedicated to accelerating data center innovation for POWER platforms, announced today its first [OpenPOWER Summit](https://openpowerfoundation.org/2015-summit/). The Summit will be held March 17-19, 2015, at the San Jose Convention Center in California. + +"In less than a year since formally establishing the organization, we've welcomed more than 75 member companies across 20 countries, formed six working groups and have begun delivering a set of technical building blocks that will help drive meaningful data center innovation," said Gordon MacKean, Chairman, OpenPOWER Foundation. "As part of our growth, we're excited to host our first OpenPOWER Summit – where we'll bring together an ecosystem of hardware and software developers, customers, academics, government agencies, industry luminaries, press and analysts to build OpenPOWER momentum." + +The three-day event will feature a keynote from OpenPOWER Chairman Gordon MacKean, member presentations, and an OpenPOWER exhibitor pavilion where members will be demonstrating their latest advancements in OpenPOWER based applications, platforms and research while networking with industry peers. 
+ +"Through the OpenPOWER Foundation, our members are leveraging the POWER processor's open architecture to innovate across the full hardware and software stack and ultimately disrupt the scale out server space," said Brad McCredie, President, OpenPOWER Foundation. "At the OpenPOWER Summit, we'll showcase some of the most cutting-edge advancements in the open ecosystem and take a look at what's to come in the year ahead as we continue to pioneer a new era of computing defined by open collaboration, customization and innovation." + +To register to attend the OpenPOWER Summit or find more information, please visit [www.openpowerfoundation.org/2015-summit](http://www.openpowerfoundation.org/2015-summit) or follow the Foundation on [LinkedIn](https://www.linkedin.com/groups/OpenPOWER-Foundation-7460635), [Facebook](https://www.facebook.com/openpower) or [Twitter](https://twitter.com/openpowerorg) with the #OpenPOWERSummit hashtag. + +### About OpenPOWER Foundation + +The OpenPOWER Foundation is an open technical community based on the POWER Architecture, enabling collaborative development and opportunity for member differentiation and industry growth. The goal of the Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server class intellectual property to serve the evolving needs of customers and industry. + +- OpenPOWER enables collaborative innovation for shared building blocks +- OpenPOWER supports independent innovation by members +- OpenPOWER builds on industry leading technology +- OpenPOWER thrives as an open development community + +Founded in late 2013 by Google, NVIDIA, Tyan, Mellanox and IBM, the organization has grown to 75+ members worldwide from all sectors of the data center ecosystem at large. For more information on the OpenPOWER Foundation, visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org/). + +**Media Contact:** Kristin Bryson OpenPOWER Media Relations Office: 914-766-4221 Cell: 203-241-9190 Email: [kabryson@us.ibm.com](mailto:kabryson@us.ibm.com) + +Logo - [http://photos.prnewswire.com/prnh/20141208/162825LOGO](http://photos.prnewswire.com/prnh/20141208/162825LOGO) + +SOURCE OpenPOWER Foundation RELATED LINKS [https://openpowerfoundation.org](https://openpowerfoundation.org/ "Link to https://openpowerfoundation.org") diff --git a/content/blog/openpower-foundation-announces-librebmc-a-power-based-fully-open-source-bmc.md b/content/blog/openpower-foundation-announces-librebmc-a-power-based-fully-open-source-bmc.md new file mode 100644 index 0000000..f4abf79 --- /dev/null +++ b/content/blog/openpower-foundation-announces-librebmc-a-power-based-fully-open-source-bmc.md @@ -0,0 +1,36 @@ +--- +title: "OpenPOWER Foundation announces LibreBMC, a POWER-based, fully open-source BMC" +date: "2021-05-10" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "antmicro" + - "librebmc" + - "bmc" + - "open-compute-project" + - "dc-scm" + - "litex" + - "openbmc" +--- + +Baseboard management controllers (BMCs) are a mainstay in data centers. They enable remote monitoring and access to servers, and they’re responsible for the rise of “lights out management.” But from a hardware perspective, there has been little innovation in this space for years. BMC processors are built on legacy architectures that are proprietary and closed. 
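For readers newer to this space, the short sketch below is purely illustrative and not part of the LibreBMC design: it shows the kind of remote management a BMC provides today by polling a server's power state over the standard Redfish REST API, which OpenBMC-based firmware commonly exposes. The BMC address and credentials are placeholders.

```python
# Illustrative sketch: query a server's power state through its BMC via the
# standard Redfish REST API (implemented by OpenBMC's bmcweb, among others).
# BMC_HOST and the credentials are placeholders, not real endpoints.
import requests

BMC_HOST = "https://bmc.example.com"
AUTH = ("admin", "change-me")

# /redfish/v1/Systems enumerates the systems this BMC manages.
# verify=False is only for the self-signed certificates typical of lab BMCs.
systems = requests.get(f"{BMC_HOST}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{BMC_HOST}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Id"), system.get("PowerState"))
```

Everything beneath that REST endpoint, from the firmware down to the BMC hardware itself, is what the workgroup described below aims to open up.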
+ The OpenPOWER Foundation is announcing a new workgroup to develop LibreBMC, the first-ever baseboard management controller with completely open-source software and hardware. The processor will be based on the POWER ISA, which was open-sourced by [IBM at OpenPOWER Summit North America](https://newsroom.ibm.com/2019-08-21-IBM-Demonstrates-Commitment-to-Open-Hardware-Movement) in August 2019. + +LibreBMC is a collaboration between OpenPOWER Foundation members Google, Antmicro, Yadro, IBM and Raptor Computing Systems. + +“The BMC is a critical component in IT infrastructure and is way past due for open collaboration and innovation,” said [James Kulina](https://www.linkedin.com/in/james-kulina/), executive director, OpenPOWER Foundation. “Moving down the stack and open sourcing technology at the silicon level is the logical next step. LibreBMC will enable improved performance, reliability, customization, and security.” + +OpenPOWER Foundation member Antmicro is developing the LibreBMC card based on [Open Compute Project’s DC-SCM specification](https://www.opencompute.org/documents/ocp-dc-scm-spec-rev-0-95-pdf). Designs are currently in progress for [Lattice ECP5](https://github.com/antmicro/ecp5-dc-scm) and [Xilinx Artix-7](https://github.com/antmicro/artix-dc-scm) FPGAs. + +“We are happy to be able to contribute our experience in open source hardware, software tools and IP to LibreBMC,” said [Michael Gielda](https://www.linkedin.com/in/mgielda/), VP Business Development at [Antmicro](https://www.linkedin.com/company/antmicro-ltd/). “Open and secure server solutions allow us to bring scalable and open flows to areas ranging from AI and software to ASIC and FPGA development, and we strongly believe that our customers’ server rooms will get an open source-driven innovation boost with LibreBMC.” + +[Bill Carter](https://www.linkedin.com/in/bill-carter-3752482/), Chief Technology Officer for the [Open Compute Project Foundation](https://www.opencompute.org/), said, “Speaking on behalf of the OCP community, we are excited to see OpenPOWER adopting OCP's DC-SCM standard for the new LibreBMC project, which aims to increase security and transparency of BMC controller hardware.” + +LibreBMC will be built using completely open source tooling enabled by SymbiFlow – an open source alternative to proprietary toolchains like Xilinx Vivado – and a completely open source SoC enabled by LiteX – an open source alternative to the MicroBlaze and NIOS SoC ecosystems. “We originally developed LiteX for internal needs at [Enjoy-Digital](http://www.enjoy-digital.fr/),” said [Florent Kermarrec](https://www.linkedin.com/in/florent-kermarrec-6428669b/?originalSubdomain=fr), maintainer of the project. “We’re glad to see it used to enable the development of new open-hardware technologies like LibreBMC.” + +Once complete, LibreBMC will run software from [OpenBMC](https://www.openbmc.org/), a Linux Foundation project for open source BMC firmware. Representatives from OpenBMC said, “It’s great to see our open source software running on open source hardware.” + +[Click here](https://openpowerfoundation.org/technical/working-groups/) to learn more about LibreBMC. If you have any questions or feedback, you can also [join our Slack workspace](https://join.slack.com/t/openpowerfoundation/shared_invite/zt-9l4fabj6-C55eMvBqAPTbzlDS1b7bzQ), or find us on [Twitter at @openpowerorg](https://twitter.com/openpowerorg)! 
diff --git a/content/blog/openpower-foundation-ecosystem-faq.md b/content/blog/openpower-foundation-ecosystem-faq.md new file mode 100644 index 0000000..4f1b402 --- /dev/null +++ b/content/blog/openpower-foundation-ecosystem-faq.md @@ -0,0 +1,44 @@ +--- +title: "A Few FAQs about the OpenPOWER Foundation Ecosystem" +date: "2018-11-13" +categories: + - "blogs" +tags: + - "featured" +--- + +by Jeff Scheel, IBM Distinguished Engineer, OpenPOWER Technical Steering Committee Chair + +The [OpenPOWER EU Summit in Amsterdam](https://openpowerfoundation.org/summit-2018-10-eu/) this past October gave me a chance to hear what questions remain on the minds of partners and clients.  Despite the Foundation’s best intentions and efforts, several questions are asked frequently enough to make writing a blog a great idea. So, here we go: + +**Q:           Where can I find technical documentation for OpenPOWER hardware and software?** + +**A:**           The place for finding all resources, including documents, in the OpenPOWER Foundation is the _Resource Catalog._ This page can be located by selecting the _Technical_ pull-down menu at the top of the Foundation website and then scrolling down to the _Resource Catalog_ entry.  You can also access the catalog directly at [https://openpowerfoundation.org/technical/resource-catalog/](https://openpowerfoundation.org/technical/resource-catalog/). + +The OpenPOWER Foundation Resource Catalog page can be daunting to the new user because it presents search capabilities at the top, both free-form and category-based, with results dynamically displayed below.  If you know the name of your document or a keyword such as “processor”, simply type it into the search bar.  You can also select categories to locate a document, but this may not be as effective in the short term, while we are reviewing and re-categorizing our documents. + +Once you find your document, the link will take you to a landing page that provides an overview and more details.  This page will remain unchanged as documents get revised, so feel free to bookmark your favorite documents.  Also, note that most Foundation documents are available in both HTML and PDF form.  The main link will always take you to the HTML version, but the PDF is available via the Acrobat icon in the navigation bar below the title. + +**Q:           Where can I get free access to cloud VMs running on OpenPOWER systems?** + +**A:**           Although the OpenPOWER Foundation does not provide free resources itself, free resources are available in the ecosystem for developers.  These resources are indicated as “Open Developer Cloud” resources on the Developer Ecosystem maps at [http://developers.openpowerfoundation.org/explore](http://developers.openpowerfoundation.org/explore).  Key locations for just basic VMs are at Oregon State University’s Open Source Lab in Oregon, U.S.A.; the State University of Campinas (Unicamp) in Sao Paulo, Brazil; and at Brno University of Technology in Brno, Czech Republic.  Special configuration needs such as FPGAs or GPUs can be found by selecting them in the _Accelerator_ filter at the top of the page.  Free versus fee access can also be controlled under the _Access_ filter. 
+ +**Q:           Where can I find the list of OpenPOWER Solutions?** + +**A:**           The OpenPOWER Foundation provides a list of solutions that have met the OpenPOWER Ready criteria and registered their information at [https://openpowerfoundation.org/technical/openpower-ready/](https://openpowerfoundation.org/technical/openpower-ready/) (bottom of the page).  These solutions have met the _OpenPOWER Ready Definition and Criteria_ at either the [Version 1](https://openpowerfoundation.org/?resource_lib=technicalopenpower-ready) (POWER8) or [Version 2](https://openpowerfoundation.org/?resource_lib=openpower-ready-definition-criteria-v2-0) (POWER9) levels.  There are other solutions which may not be registered, but this list serves as a good starting place.  If you are looking for a specific solution which is not listed, please reach out to the provider and request information directly; hearing from clients and partners improves their understanding of market demand for their solutions on OpenPOWER servers.  + +If you are a solution provider, we would love to have your product in our list of solutions.  Directions for certifying applications as OpenPOWER Ready™ can be found at the bottom of the website above. + +**Q:           Where can I find a list of applications for Linux on OpenPOWER?** + +**A:**           Besides the OpenPOWER Ready products mentioned in the previous question, the OpenPOWER Foundation does not maintain a list of all applications for Linux on OpenPOWER.  + +For open source products, a great place to start your search is the IBM Linux on Power Developer Portal _Find packages_ tool at [https://developer.ibm.com/linuxonpower/open-source-pkgs/](https://developer.ibm.com/linuxonpower/open-source-pkgs/).  This tool currently searches multiple repositories throughout the ecosystem, with a constantly expanding list of locations and packages.  + +For applications with proprietary licenses, the software vendor remains the best starting place for this information so that the company can both see your interest and provide the most accurate view of the state of the software product. + +**Q:           How can I participate in the OpenPOWER Foundation?** + +**A:**           If you and your company would like to join the Foundation, information on how to join can be found at [https://openpowerfoundation.org/membership/how-to-join/](https://openpowerfoundation.org/membership/how-to-join/).  If your company has already joined the Foundation, please join the members community by registering at [https://members.openpowerfoundation.org/user/register](https://members.openpowerfoundation.org/user/register) and then finding a work group in which to participate.  Workgroups are listed under the _View Workgroups_ link in the _OpenPOWER Foundation Members Area_ of the community.  Simply pick your group and click _Join_.  We would love to have you join us. + +Hopefully, these answers will help you get started developing for or using OpenPOWER systems.  We at the OpenPOWER Foundation, particularly those of us who participate in the Work Groups, continuously work to improve our ecosystem.  In the coming months, keep your eyes on our website as we transform it and add new collaboration features.  If there is a feature that you would like to see, reach out and let us know.  Or, better yet, join us. 
diff --git a/content/blog/openpower-foundation-executive-director-hugh-blemings.md b/content/blog/openpower-foundation-executive-director-hugh-blemings.md new file mode 100644 index 0000000..acd8281 --- /dev/null +++ b/content/blog/openpower-foundation-executive-director-hugh-blemings.md @@ -0,0 +1,37 @@ +--- +title: "OpenPOWER Foundation Executive Director Has a History of Tinkering with Technology" +date: "2017-11-14" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "power" + - "rackspace" + - "openpower-foundation" + - "hugh-blemings" + - "linux" +--- + +Hugh Blemings, Executive Director, OpenPOWER Foundation + +\[caption id="attachment\_5108" align="alignright" width="150"\][![Hugh Blemings, Executive Director, OpenPOWER Foundation](images/HughBlemings-20170424-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2017/11/HughBlemings-20170424.jpg) "It's fair to say that POWER has been in my blood in one way or another for nearly 20 years." - Hugh Blemings, Executive Director, OpenPOWER Foundation\[/caption\] + +It’s fair to say that POWER has been in my blood in one way or another for nearly 20 years. + +I grew up as pretty much the stereotypical “geek.” When I was eight years old, I took a clock apart and put it back together. I was inspired by and never fully recovered from this first experience tinkering with technology. + +My career has taken a number of twists and turns: + +- I began working in electronics and software design for a local electronics firm, which exposed me to various processor architectures and gave me a good sense of basic, low-level coding. +- I got into Linux and began playing with Linux on PowerPC hardware. +- While at IBM, I managed the OzLabs team that did the upstream Linux kernel port for POWER4 (this group continues doing amazing work to this day, including support for each new POWER chip). +- At Rackspace, I was fortunate to be peripherally involved in the Barreleye project – the company’s vision for a more powerful cloud. I was heartened by the excellent benchmark results we saw with real-world workloads. + +I’m excited that the latest development in my career has led me to join the OpenPOWER Foundation as its Executive Director. When the foundation was formed, I was inspired to see so many key members come together to support POWER technology. These members each play an important role in making this technology as open and widely used as possible – all while retaining its architectural superiority. + +Across the OpenPOWER Foundation membership, there is amazing work being done. From software and hardware to interconnect and cloud technologies, I’m constantly inspired by how POWER is being used. + +As Executive Director of the foundation, I intend to shine a brighter light on this work by our members. I also plan to [make it easier for developers to experience POWER technology](https://openpowerfoundation.org/blogs/meet-new-openpower-chair/), a goal I share with OpenPOWER Foundation’s chairperson Robbie Williamson. By allowing more developers to run their code on POWER – perhaps even remotely – the OpenPOWER Foundation will continue to grow. + +I’m looking forward to meeting and speaking with as many OpenPOWER Foundation members as possible in the coming weeks and months. Please find me on [LinkedIn](https://www.linkedin.com/in/hugh-blemings/) or [Twitter](https://twitter.com/hughhalf), and don’t hesitate to say hello! 
diff --git a/content/blog/openpower-foundation-executive-director-seeks-to-accelerate-ecosystem-growth.md b/content/blog/openpower-foundation-executive-director-seeks-to-accelerate-ecosystem-growth.md new file mode 100644 index 0000000..2a06246 --- /dev/null +++ b/content/blog/openpower-foundation-executive-director-seeks-to-accelerate-ecosystem-growth.md @@ -0,0 +1,34 @@ +--- +title: "OpenPOWER Foundation Executive Director Seeks to Accelerate Ecosystem Growth" +date: "2020-06-01" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "red-hat" + - "power-isa" + - "james-kulina" + - "hyper-sh" + - "enovance" + - "kata-containers" +--- + +By James Kulina, Executive Director, OpenPOWER Foundation + +\[caption id="attachment\_7535" align="alignleft" width="300"\]![James Kulina](images/JamesKulina_Bio_Photo-300x225.jpg) “It’s my goal to make OpenPOWER one of the easiest platforms to go from an idea to a silicon chip.” +\- James Kulina, Executive Director, OpenPOWER Foundation\[/caption\] + +Hey everyone - I’m a new face here, but I’m very excited to finally say “hello” and formally introduce myself. + +First, I want to thank [my predecessor, Hugh Blemings](https://openpowerfoundation.org/new-executive-director-selected-to-lead-openpower-foundation/). Hugh, the OpenPOWER Foundation is one of the most open, high-performance architectures and ecosystems in the industry today, in no small part as a result of your commitment. On behalf of our members, thank you for your leadership. We’re lucky to have you continue on as an advisor to our board of directors! + +I’ve worked in open source infrastructure software throughout my career. Most recently, I was COO at Hyper.sh, an open source software startup focused on secure container technology. We co-developed and launched the [Kata Containers](https://www.openstack.org/news/view/365/kata-containers-project-launches-to-build-secure-container-infrastructure) project, one of the industry’s first hypervisor-based container runtimes, and contributed to other open source projects including Kubernetes, Docker and the Open Containers Initiative before a successful exit. + +Prior to that, I worked in product management focusing on OpenStack at Paris-based startup eNovance and Red Hat (which [acquired eNovance](https://www.redhat.com/en/about/press-releases/red-hat-acquire-enovance-leader-openstack-integration-services)). + +I believe the success of open source software has paved the way and set the stage for open source hardware. As Executive Director, I see our mission of growing an open, sustainable ecosystem for the POWER Architecture and its associated technologies as more important than ever. + +The open-sourcing of the POWER ISA last August was a seminal moment for the Foundation. My objective is to build momentum on that achievement and to accelerate the development of a more complete ecosystem and supply chain around the POWER ISA. It’s my goal to make OpenPOWER one of the easiest platforms to go from an idea to a silicon chip. + +You’ll continue to hear updates from me here on the OpenPOWER blog, but you can also find me on [LinkedIn](https://www.linkedin.com/in/james-kulina/) and [Twitter](https://twitter.com/jameskulina). We’re also opening up our OpenPOWER Foundation Slack workspace, and I’d love for all members and followers to join us there. Slack will become a forum for OpenPOWER advocates to connect and collaborate with each other. 
Please [submit this form](https://openpowerfoundation.org/get-involved/slack-workspace/) to receive an invite, and feel free to ask any questions you have for me in the _#Social-James Kulina AMA channel._ diff --git a/content/blog/openpower-foundation-introduces-ibm-hardware-and-software-contributions-at-openpower-summit-2020.md b/content/blog/openpower-foundation-introduces-ibm-hardware-and-software-contributions-at-openpower-summit-2020.md new file mode 100644 index 0000000..748a237 --- /dev/null +++ b/content/blog/openpower-foundation-introduces-ibm-hardware-and-software-contributions-at-openpower-summit-2020.md @@ -0,0 +1,69 @@ +--- +title: "OpenPOWER Foundation Introduces IBM Hardware and Software Contributions at OpenPOWER Summit 2020" +date: "2020-09-15" +categories: + - "blogs" +tags: + - "ibm" + - "openpower-summit" + - "openpower-foundation" + - "opencapi" + - "powerai" + - "yadro" + - "antmicro" + - "a2i" + - "openpower-summit-2020" + - "a2o-power-processor-core" + - "a2o" + - "open-ce" + - "omi" + - "open-memory-interface" + - "mendy-furmanek" + - "allan-cantle" + - "opencapi-consortium" +--- + +Today at [OpenPOWER Summit 2020](https://events.linuxfoundation.org/openpower-summit-north-america/), OpenPOWER Foundation announced two key technologies contributed by IBM to the open source community. + +- A2O POWER processor core, an out-of-order follow-up to the A2I core, and associated FPGA environment +- Open Cognitive Environment (Open-CE), based on IBM’s PowerAI to enable improved consumability of AI and deep learning frameworks + +The contributions follow the open sourcing of the [POWER ISA and associated reference designs in August 2019](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) and the [A2I POWER processor core in June 2020](https://openpowerfoundation.org/a2i-power-processor-core-contributed-to-openpower-community-to-advance-open-hardware-collaboration/). They represent IBM’s continued commitment to fostering innovation around the POWER architecture from the OpenPOWER ecosystem. + +## A2O open sourced for enhanced single-thread performance + +The A2O core is an out-of-order, multi-threaded, 64-bit POWER ISA core that was developed as a processor for customization and embedded use in system-on-chip (SoC) devices. It’s most suitable for single thread performance optimization. A follow-up to its parent high-streaming throughput A2I predecessor, it maintains the same modular design approach and fabric structure. The Auxiliary Execution Unit (AXU) is tightly-coupled to the core, enabling many possibilities for special-purpose designs for new markets tackling the challenges of modern workloads. + +Speaking of the A2O at OpenPOWER Summit 2020, [Mendy Furmanek](https://www.linkedin.com/in/mendy-furmanek-640425/), President of the OpenPOWER Foundation and Director of POWER Open Hardware Business Development at IBM, said, “I’m excited to announce the opening of the out-of-order A2O core design. A2O provides enhanced single-thread performance and is a perfect companion to the highly scalable 4-way SMT commercialized A2I core. 
These, combined with the ease-of-entry Microwatt core, do an excellent job of showcasing the versatility of the Power ISA.” + +![A2O POWER Processor Core](images/a2o-power-processor-core.png) + +![](images/a2o-power-processor-core-design.png) + +The A2O core is available on GitHub at: [https://github.com/openpower-cores/a2o](https://github.com/openpower-cores/a2o) + +## IBM PowerAI open sourced as Open Cognitive Environment (Open-CE) + +Open-CE, based on IBM's PowerAI project, which was released as IBM Watson Machine Learning Community Edition, is designed to make foundational AI and deep learning frameworks, libraries and tools like TensorFlow and PyTorch more accessible. Open-CE is a source-to-image project that provides a pre-integrated, multi-architectural set of recipes, build scripts, predefined Kubernetes-native continuous integration pipeline code and cutting-edge models for building a complete environment of packages and container images for AI development. + +OpenPOWER member Oregon State University (OSU) also announced an intent to build and offer community binaries related to each tagged release of Open-CE in an effort to grow participation in the project and in the open source AI community as a whole. Community binaries will be offered through an easily consumable conda channel for multiple architectures, including powerpc little endian, both with and without NVIDIA CUDA support. + +OSU has a longstanding commitment to support open source software. At the [OSU Open Source Lab](https://osuosl.org/), for example, researchers develop AI-based tools to answer scientific questions and challenges. Many of the research groups leverage multiple architectures to complete their work, and multi-architecture tools like PowerAI allow them to focus on research rather than software. Open sourcing this resource will make it more accessible and enable it to move at the pace of the research community, which is critical to its continued success. + +“We leverage PowerAI for all of our main AI tools across different architectures. These packages are optimized to take advantage of architecture-specific capabilities other generalized package sets cannot provide,” said [Christopher Sullivan](https://www.linkedin.com/in/christopher-m-sullivan-446904/), Assistant Director for Biocomputing at the Center for Genome Research and Biocomputing at Oregon State University. “We see a huge benefit in making this a community-managed resource under the new name Open-CE.” + +Open-CE is available on GitHub at: [https://github.com/open-ce](https://github.com/open-ce) + +## Allan Cantle Shares the OMI Advantage as New OpenCAPI Technical Director + +[Allan Cantle](https://www.linkedin.com/in/allan-cantle-666405/), CEO of Nallasway and recently appointed Technical Director and Board Advisor of the [OpenCAPI Consortium](https://opencapi.org/), also spoke at OpenPOWER Summit 2020. Cantle previously held positions as CTO of the ISI group within Molex and CEO and Founder of Nallatech. He also has experience with the OpenPOWER Foundation as a member of the Board of Directors and Chair of the OpenPOWER Accelerator workgroup from 2016 to 2018. + +“OpenCAPI is a clear leader with its ultra-low latency implementation together with the complementary Open Memory Interface (OMI),” said Cantle during his keynote at OpenPOWER Summit 2020. 
“OMI provides best in class memory bandwidth, together with depth, at the lowest possible cost when compared directly with native DDR4 DIMM and HBM Memories." + +OMI has a bandwidth advantage of 4x over DDR4 and 1.2x over HBM2 as well as a DRAM Depth advantage of 2.3x over DDR4 and 116x over HBMs, while only adding <10ns latency over standard RDIMMs (below). The technical performance of OpenCAPI and OMI - in production today - make it a clear choice for the industry. + +![](images/The-OMI-Advantage-1-1024x576.png) + +  + +OpenPOWER Summit is the premier gathering for developers of silicon, systems and applications built on the POWER architecture, and is sponsored by Antmicro, IBM, OpenCAPI Consortium and Yadro. More information on the A2O and OpenCE contributions, as well as other developments in the OpenPOWER ecosystem, [can be found online here](https://events.linuxfoundation.org/openpower-summit-north-america/). diff --git a/content/blog/openpower-foundation-members-bring-research-and-insight-to-nvidia-gtc-2021.md b/content/blog/openpower-foundation-members-bring-research-and-insight-to-nvidia-gtc-2021.md new file mode 100644 index 0000000..5bbd45b --- /dev/null +++ b/content/blog/openpower-foundation-members-bring-research-and-insight-to-nvidia-gtc-2021.md @@ -0,0 +1,93 @@ +--- +title: "OpenPOWER Foundation Members Bring Research and Insight to NVIDIA GTC 2021" +date: "2021-04-09" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "nvidia" + - "openpower-foundation" + - "ohio-state-university" + - "oak-ridge-national-laboratory" + - "gtc" + - "gtc-2021" + - "nvidia-gtc" + - "cineca" +--- + +_By_ [_Ganesan Narayanasamy_](https://www.linkedin.com/in/ganesannarayanasamy/)_, Leader, OpenPOWER Academic Discussion Group and IBM POWER enablement_ + +Academic and research organizations have always been leaders in pushing the boundaries of science and technology, and work closely with companies to solve some of the world’s biggest challenges. As the leader of the OpenPOWER Academic Discussion Group, I believe working with academics and research centers to develop and adopt POWER/OpenPOWER Systems is a key to growing our ecosystem. + +Summits and conferences throughout the year provide an opportunity for technology leaders to discuss the latest advances in technology and better understand remarkable pieces of hardware. One of the conferences I look forward to each year is NVIDIA’s GTC - taking place April 12-16, 2021. + +NVIDIA’s GTC brings together a global community of developers, researchers, engineers, and innovators to experience global innovation and collaboration. The event delivers the latest breakthroughs in AI, HPC, accelerated data science, healthcare, graphics, government and more. It’s ranked by some as the #1 AI Conference and this year's registration is FREE for the virtual event. + +If you’re attending GTC 2021, below are a handful of sessions you’ll be interested in from OpenPOWER Foundation members IBM, Oak Ridge National Laboratory, CINECA and Ohio State University. + +If there's a GTC session not included here that you think would be valuable to the OpenPOWER community, we want to know about it! Share them with us on Twitter at [@openpowerorg](https://twitter.com/openpowerorg) and [@ganesanblue](https://twitter.com/GanesanBlue). 
+ +**Aerodynamic Flow Control Simulations with Many GPUs on the Summit Supercomputer** + +- Session ID S3123 +- Nicholson Koukpaizan, Postdoctoral Research Associate, Oak Ridge National Laboratory +- A GPU-accelerated computational fluid dynamics (CFD) solver was used for aerodynamic flow control simulations on the Summit supercomputer at Oak Ridge Leadership Computing Facility. The GPU implementation of the FORTRAN 90 code relies on OpenACC directives to offload work to the GPU and message-passing interface (MPI) for multi-core/multi-device parallelism, taking advantage of the CUDA-aware MPI capability. We'll address implementation details, as well as performance results and optimization. Finally, we'll present new scientific results obtained by leveraging the GPUs for the control of aerodynamic flow separation using fluidic oscillators (actuators that generate spatially oscillating jets without any moving part). We'll add a few details pertaining to CFD and aerodynamic flow control to make the talk accessible to people who are not necessarily familiar with these domains. + +**Fluid Dynamic Simulations of Euplectella aspergillum Sponge** + +- Session ID E31218 +- Giorgio Amati, Senior HPC engineering, CINECA +- We present our experience in simulating the flow around a silica-based sponge, the "Euplectella Aspergillum," using a TOP500 machine equipped with NVIDIA GPUs. A Lattice Boltzmann Method (LBM)-based code was used to explore fluid dynamical features of this complex structure. We'll present some physical results, together with details of code implementations and performance figures (up to about 4,000 V100 GPU) for our MPI+OpenACC LBM code. + +**Accelerating GPU-Enabled HPC and Data Science Applications with On-the-Fly Compression** + +- Session ID S31664 +- Dhabaleswar K (DK) Panda, Professor and University Distinguished Scholar, Ohio State University +- We'll discuss the effectiveness of using high-performance GPU-based compression algorithms to improve large message transfers for GPU-resident data in MPI libraries. We'll discuss the performance bottleneck of transferring large GPU-resident data, which is due to the relatively low throughput of the commodity networks (such as Ethernet and InfiniBand). We'll provide an overview of the proposed on-the-fly message compression schemes in CUDA-Aware MPI libraries, like MVAPICH2-GDR, to reduce communication volume. We'll highlight the challenges of integrating compression algorithms into MPI libraries and discuss optimization strategies. We'll use the popular OSU micro-benchmark suite and representative applications from HPC and data science to demonstrate the efficiency of the proposed solutions. The experimental evaluations show that we can gain up to 37% improvement in execution time of AWP-ODC and 2.86x improvement in Dask throughput. + +**Introducing Cloud-Native Supercomputing: Bare-Metal, Secured Supercomputing Architecture** + +- Session ID S32021 +- Gilad Shainer, SVP Marketing, Networking, NVIDIA, Dhabaleswar K (DK) Panda, Professor and University Distinguished Scholar, Ohio State University and Paul Calleja, Director, Research Computing Services, University Of Cambridge +- High performance computing and artificial intelligence supercomputers have evolved to be the primary data processing engines for wide commercial use, hosting a variety of users and applications. While providing the highest possible performance, supercomputers must also offer multi-tenancy security. 
Therefore, they need to be designed as cloud-native supercomputing platforms. The key element that enables this architecture transition is the data processing unit (DPU). The DPU is a fully integrated data-center-on-a-chip platform that can manage the data center infrastructure instead of the host processor, enabling security and orchestration of the supercomputer. This architecture enables supercomputing platforms to deliver optimal bare-metal performance, while natively supporting multi-node tenant isolation. We'll introduce the new supercomputing architecture and include initial application performance results. + +**Optimizing Communication on GPU-Based HPC Systems for Dask and cuML Using MVAPICH2-GDR** + +- Session ID S31627 +- Dhabaleswar K (DK) Panda, Professor and University Distinguished Scholar, Ohio State University and Aamir Shafi, Research Scientist, Ohio State University +- Dask and cuML are important components of the NVIDIA RAPIDS framework capable of executing in the Multi-Node Multi-GPU setting on a cluster of GPUs connected with an RDMA-capable interconnect like InfiniBand. The MVAPICH2-GDR library is a high-performance implementation of the MPI standard for programming such systems. We'll present our approach to architecting MVAPICH2-GDR-based communication backends for Dask and cuML. The backend for Dask exploits mpi4py over MVAPICH2-GDR and supports communication using asynchronous I/O communication co-routines. We'll present performance evaluation results from multiple HPC clusters (an OSU cluster with V100 GPUs, TACC’s Frontera with 32 NVIDIA Quadro RTX 5000 GPUs, and SDSC’s Comet with 32 NVIDIA P100 GPUs) and demonstrate the efficiency of MPI-based backends using micro-benchmark results and applications like sum of cuPy array with transpose, cuDF merge, K-Means, Nearest Neighbors, Random Forest, and tSVD. + +**High Performance Scalable Distributed Deep Learning with MVAPICH2-GDR** + +- Session ID S31646 +- Dhabaleswar K (DK) Panda, Professor and University Distinguished Scholar, Ohio State University and Hari Subramoni, Research Scientist, Ohio State University +- We'll highlight recent advances in AI and HPC technologies to improve the performance of deep neural network (DNN) training on NVIDIA GPUs. We'll discuss many exciting challenges and opportunities for HPC and AI researchers. Traditionally, DL frameworks have utilized a single GPU to accelerate the performance of DNN training/inference. However, approaches to parallelize training are being actively explored. Several DL frameworks, such as TensorFlow, have emerged that offer ease-of-use and flexibility to train complex DNNs. We'll provide an overview of interesting trends in DL frameworks from an architectural/performance standpoint, and evaluate new high-level distributed frameworks like DeepSpeed and Horovod. We'll highlight new challenges for message-passing interface runtimes to efficiently support DNN training, and will discuss different parallelization strategies for distributed training. Finally, we scale DNN training for very-large pathology images using model-parallelism to 1,024 NVIDIA V100 GPUs. + +**300 Years In the Making: IBM is Solving Big Problems by Revitalizing Old Methods with New Technology** + +- Session ID SS33244 +- Matt Drahzal, Worldwide Business Development - Cognitive Systems, IBM Systems +- In the 1700s, Thomas Bayes created a now widely known and relatively simple method for finding the probability of one event happening. 
Nearly 300 years later, NVIDIA GPUs made it possible to apply this theorem to the exploding field of high-performance computing. In this session, Matt Drahzal will share how IBM and NVIDIA collaborated to create an appliance for HPC clusters that will make workloads run faster and generate better results. Application areas that are already seeing strong results from the IBM Bayesian Optimization Accelerator include automotive, aerospace, electronic design and oil & gas. + +**To the Edge with the Mayflower Autonomous Ship** + +- Session ID: SS33194 +- Andy Stanford-Clark, Chief Technology Officer, IBM UK/Ireland +- Learn how the Mayflower Autonomous Ship (MAS) will self-navigate across oceans, run operations 24/7, and collect and analyze large amounts of real-time data on climate and ocean health with its AI Captain powered by IBM Automation. The MAS is led by the marine research organization, ProMare, with IBM acting as both lead technology partner and lead scientific partner for the project. To enable accelerated discovery that wasn't possible before, the MAS will run AI & Automation workloads on NVIDIA Jetson AGX Xavier Edge devices. MAS represents a new class of efficient, crewless, and solar-powered ships, which combines new and time-tested automation technologies. + +**Working Together: Four things a data scientist should demand from enterprise IT** + +- Session ID: SS33231 +- Douglas O'Flaherty, Program Director, IBM Storage +- As data science matures in an organization, there are new demands on the data science teams that are often unfamiliar. Rather than seeing the need to work with enterprise IT as an inhibitor to innovation, the data science teams should be asking the enterprise IT teams to actively apply their expertise to support the data science mission. Enhancements in storage, intelligent data management, and hybrid cloud make it easier to scale data science productivity and collaboration - if you know what to ask for. In this session, you will hear from IBM's leader in enterprise data science, Steve Eliuk, as he describes the scale and the challenges of building their enterprise data science environment. And, how they are leveraging IBM's tools and platforms to manage that growth. + +**IBM + NVIDIA for Accelerated, High-Performing and Secure ADAS/AV Development** + +- Session ID: SS33232 +- Frank Kraemer, IBM Systems Architect +- The automobile is one of the most technically sophisticated and connected platforms on the planet. Building advanced driver assistance systems (ADAS) and autonomous vehicles (AV) combines miles and miles of data, consumer behavior, and simulation together. Simple access to data for deep learning AI, development, and system testing is required. These requirements create a growing need for hybrid cloud data management. To become industry leaders, ADAS/AV developers need a high-performance, accelerated, scalable information architecture which will help them deliver insights faster. NVIDIA and IBM Storage have been working together for many years to develop reference architectures, to deliver accelerated, high-performing, scalable and secure end to end AI infrastructure for ADAS/AV. 
+ +  + +[Click here to learn more and register for GTC 2021!](https://www.nvidia.com/en-us/gtc/) diff --git a/content/blog/openpower-foundation-members-collaborate-on-liquid-cooling-for-hpc.md b/content/blog/openpower-foundation-members-collaborate-on-liquid-cooling-for-hpc.md new file mode 100644 index 0000000..c75889d --- /dev/null +++ b/content/blog/openpower-foundation-members-collaborate-on-liquid-cooling-for-hpc.md @@ -0,0 +1,22 @@ +--- +title: "OpenPOWER Foundation Members Collaborate on Liquid Cooling for HPC" +date: "2019-03-06" +categories: + - "blogs" +--- + +By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM + +![](images/singapire-300x225.jpg) + +New to the OpenPOWER Foundation, [Open Computing Singapore](https://opencomputing.sg/) provides High Performance Computing (HPC) products and solutions for the power architecture. They work closely with data centers, government agencies and enterprises across the Asia Pacific region. + +Through joint research collaboration, Open Compute Singapore and fellow OpenPOWER Foundation member [National University of Singapore’s School of Engineering - Centre for Energy Research & Technology (CERT)](https://www.eng.nus.edu.sg/cert/) developed a world-class liquid cooling system used for data centers and HPC servers. [Liquid cooling has been on the rise](https://www.datacenterknowledge.com/power-and-cooling/five-reasons-data-center-liquid-cooling-rise) as workloads have increased, particularly due to artificial intelligence and machine learning applications. Open Computing Singapore’s liquid cooling system has several applications including: + +- Monitoring mission-critical Data Center temperatures +- Air-flow management +- Sensor detections for water leakage sending out alerts and notifications + +  + +What’s up next for Open Compute Singapore? Check them out at this year’s [SC Asia 2019.](https://www.sc-asia.org/) diff --git a/content/blog/openpower-foundation-members-help-combat-covid-19.md b/content/blog/openpower-foundation-members-help-combat-covid-19.md new file mode 100644 index 0000000..61ee6f1 --- /dev/null +++ b/content/blog/openpower-foundation-members-help-combat-covid-19.md @@ -0,0 +1,55 @@ +--- +title: "OpenPOWER Foundation Members Help Combat COVID-19" +date: "2020-03-27" +categories: + - "blogs" +tags: + - "supercomputer" + - "summit" + - "covid-19" + - "coronavirus" +--- + +Hugh Blemings, Executive Director, OpenPOWER Foundation + +First, the thoughts of all involved in OpenPOWER and the Foundation are with the broader community and their families in these trying and uncertain times. Please stay safe out there. + +As the Executive Director of the OpenPOWER Foundation, I have had the pleasure for years of working with folks across the globe making technological advances throughout the ecosystem. We as an organization have never felt so proud to be part of the computing community as we are at this moment. + +Companies and individuals, including OpenPOWER members, are stepping up to help the world fight COVID-19, partnering with governments and nonprofit organizations to make a difference. + +Below are just some of many inspiring examples of how OpenPOWER Foundation members are collaborating together in a time of need. 
We’d love to learn more about other examples - please share anything we’ve missed in this recap in the comments below or on Twitter at [@hughhalf](https://twitter.com/hughhalf) or [@openpowerorg](https://twitter.com/OpenPOWERorg). We're of course also happy to continue this conversation in one of our upcoming [Virtual Coffee Calls](https://openpowerfoundation.org/openpower-virtual-coffee-calls/), which we’ll be using to connect, learn from each other and stay in touch. + +**IBM** + +In collaboration with the White House Office of Science and Technology Policy, the U.S. Department of Energy and many others, [IBM is helping launch the COVID-19 High Performance Computing Consortium](https://newsroom.ibm.com/IBM-helps-bring-supercomputers-into-the-global-fight-against-COVID-19?utm_medium=OSocial&utm_source=Twitter&utm_content=000033TP&cm_mmc=OSocial_Twitter-_-IBM+Master+Brand_Communications-_-WW_WW&cm_mmca1=000033TP&social_post=3217656146&linkId=84819600), which will bring forth an unprecedented amount of computing power—16 systems with more than 330 petaflops, 775,000 CPU cores, 34,000 GPUs, and counting — to help researchers everywhere better understand COVID-19, its treatments and potential cures. + +Two critically important applications of this supercomputing capacity could include identifying potential new therapies and a possible vaccine, and developing predictive models to assess how the disease is progressing. The consortium will collaborate on reviewing proposals from researchers worldwide, making supercomputing resources available to projects that can make the most immediate impact, and providing technical assistance to researchers utilizing the systems. + +**Oak Ridge National Laboratory** + +Researchers at Oak Ridge National Laboratory have used Summit, the world’s most powerful supercomputer, to [identify 77 small-molecule drug compounds](https://www.ornl.gov/news/early-research-existing-drug-compounds-supercomputing-could-combat-coronavirus) that could warrant further study in the fight against the COVID-19 outbreak. More than [8,000 compounds were simulated](https://onezero.medium.com/the-worlds-most-powerful-supercomputer-has-entered-the-fight-against-coronavirus-3e98c4d67459) to screen for those that bind to the main “spike” protein of the coronavirus. + + + +Oak Ridge is also a leading member of the COVID-19 High Performance Computing Consortium, contributing 200 petaflops and 4,608 POWER9 nodes to the cause. + +**Lawrence Livermore National Laboratory** + +Lawrence Livermore National Laboratory scientists are combining artificial intelligence, bioinformatics and supercomputing to help [discover candidates for new antibodies and pharmaceutical drugs](https://www.llnl.gov/news/lab-antibody-anti-viral-research-aids-covid-19-response) to combat COVID-19. + + + +Lawrence Livermore - home to Sierra, the POWER-based second-most powerful supercomputer in the world - is also a member of the COVID-19 High Performance Computing Consortium, helping to contribute 31.7 petaflops and 7,001 nodes in partnership with Los Alamos National Laboratory and Sandia National Laboratories. + +**Nimbix** + +Nimbix is collaborating with others in the technology industry to support researchers and healthcare workers with the computational horsepower needed in their fight to stop the pandemic. You are [welcome to apply](https://www.nimbix.net/covid-compute-research-support) for complimentary compute resources from Nimbix if you’re a problem solver at the forefront of COVID-19 discovery efforts. 
+ +**NVIDIA** + +[NVIDIA is providing a free 90-day license to Parabricks](https://blogs.nvidia.com/blog/2020/03/19/coronavirus-research-parabricks/) to any researcher in the worldwide effort to fight the coronavirus. Parabricks uses GPUs to accelerate the analysis of sequence data by as much as 50x. Given the unprecedented spread of the virus, the acceleration of sequencing time could have an enormous positive impact. [Please apply](https://www.nvidia.com/en-us/docs/nvidia-parabricks-researchers/) to access NVIDIA Parabricks. + +**Exscalate4CoV (E4C)** + +OpenPOWER Foundation members [CINECA](https://www.cineca.it/en), [Barcelona Supercomputing Center](https://www.bsc.es/) and [Jülich Supercomputing Centre](https://www.fz-juelich.de/ias/jsc/EN/Home/home_node.html) are each participating in the Italian-based E4C consortium working on research projects to respond better and more quickly to [pandemic situations such as the coronavirus](https://www.hpcwire.com/off-the-wire/exscalate4cov-awarded-e3m-eu-call-to-combat-coronavirus/). All three will perform molecular dynamics simulations of viral proteins. diff --git a/content/blog/openpower-foundation-momentum-leads-greater-community-gains-2017-2.md b/content/blog/openpower-foundation-momentum-leads-greater-community-gains-2017-2.md new file mode 100644 index 0000000..3be7723 --- /dev/null +++ b/content/blog/openpower-foundation-momentum-leads-greater-community-gains-2017-2.md @@ -0,0 +1,26 @@ +--- +title: "OpenPOWER Foundation Momentum Leads to Greater Community Gains in 2017" +date: "2017-02-21" +categories: +  - "blogs" +--- + +Dear OpenPOWER Members, + + The OpenPOWER Foundation had a great 2016.  That momentum will lead us to greater community gains in 2017. + + In 2016, our membership rose to nearly 300 members and continues to grow at a healthy pace. We introduced a new membership level for ISVs that is growing steadily as software providers are seeing the benefits of innovating on the OpenPOWER platform and engaging in our work group activity. The Foundation currently has 13 technical work groups and multiple committees for you to contribute to, with more on the horizon focusing on various application areas. + + The Foundation’s continued success saw over 400 people attend the second annual Summit held in San Jose in March. With 80 member presentations, 20 member exhibits and meaningful dialogue with both enterprise customers and ecosystem members, the OpenPOWER message has been heard. In June we hosted our second annual China Summit in Beijing with local government and industry both contributing and attending, furthering OpenPOWER activity in the region. And in October, we hosted our first European Summit in Barcelona, engaging directly with over 200 European members and local industries and generating meaningful dialogue focused on European solutions. + + 2016 also witnessed the formation of a number of new OpenPOWER initiatives, including an Ambassador Program to expand our reach, a Developer Challenge with over 300 participants, and a developer system installed at the University of Munich for academia and industry to use freely for innovation. + + For 2017, the OpenPOWER Board approved four areas of focus that include machine learning/AI, database and analytics, cloud applications and containers. The strategy for 2017 includes expanding membership engagement, membership development and membership testimonials and experiences. + + 2017 will bring our Foundation many opportunities and advancements. 
We plan to build upon our Summit events by adding Developer Congresses. We look to extend our reach worldwide with our growing Ambassador program. We will promote technical innovations at various academic labs and in industry. We will host our second Developer Challenge that promotes creative innovation and solutions from students and industry. We plan to open additional application-oriented work groups to further technical solutions that benefit specific application areas. + + As our community reaches 300+ members, I encourage everyone to get involved. Engage in our working groups and committees. Promote your solutions via Summit presentations and demos, and share testimonials with the Marketing Committee. Volunteer your time as an Ambassador or as a technical liaison at the Developer Congress. And last but not least, make sure your solution is listed as OpenPOWER Ready. + +We look forward to 2017 and the opportunities it will bring.  The Board would like to personally THANK YOU for everything you have done to improve the OpenPOWER Foundation. + +_John Zannos, Canonical_, OpenPOWER Board Chair + +_Bryan Talik, IBM_, OpenPOWER Board President 
diff --git a/content/blog/openpower-foundation-names-new-board-leadership-to-propel-power-architecture.md b/content/blog/openpower-foundation-names-new-board-leadership-to-propel-power-architecture.md new file mode 100644 index 0000000..fd95508 --- /dev/null +++ b/content/blog/openpower-foundation-names-new-board-leadership-to-propel-power-architecture.md @@ -0,0 +1,39 @@ +--- +title: "OpenPOWER Foundation Names New Board Leadership to Propel POWER Architecture" +date: "2016-01-05" +categories: +  - "press-releases" +  - "blogs" +tags: +  - "featured" +--- + +### John Zannos of Canonical elected Chair; Calista Redmond of IBM elected President + +**PISCATAWAY, NJ, Jan. 5, 2016** – The OpenPOWER Foundation today announced the election of John Zannos, Canonical, Chair, and Calista Redmond, IBM, President of the OpenPOWER Foundation Board of Directors, effective January 1, 2016. Zannos and Redmond bring deep knowledge of the open technology development community and intimate familiarity with the Foundation’s core mission, with both playing key roles within the Foundation since 2014. The new leadership will continue to guide the proliferation of OpenPOWER-based technology solutions built on IBM’s POWER architecture in today’s datacenters. + +Former Chair, Gordon MacKean of Google, and President, Brad McCredie of IBM, will remain close advisors to the OpenPOWER Foundation, serving as non-voting Board Advisors to provide technical and strategic roadmap guidance as appropriate. + +**Driving OpenPOWER Momentum and Vision** With newly elected officers from Canonical, a leader in enabling and growing open source software applications, and IBM, founding member and provider of the hardware cornerstone of the Foundation, OpenPOWER will continue under a dual leadership model. John and Calista’s combined areas of expertise will enable the Foundation to continue toward their end-to-end strategic vision, promoting engagement across the entire Board, Technical Steering Committee and membership as a whole. 
+ +**John Zannos** has served as a Director on the OpenPOWER Foundation Board since 2014 while also serving as Vice President of Worldwide Alliances, Business Development and Cloud Platforms at Canonical. He has a rich history of supporting open and collaborative communities, having served on the Boards of the OpenStack, OPNFV and other open source foundations. His interests are to ensure an open and thriving technology ecosystem that drives innovation and collaboration. During his career John has led business and technology efforts around cloud and computing platforms at HP, Compaq, Digital Equipment Corporation, ARCO and the US EPA. + +**Calista Redmond** has served as both a delegate on the OpenPOWER Foundation Board and Director for OpenPOWER Global Alliances at IBM since 2014. During that time, she has been deeply ingrained in the Foundation, driving OpenPOWER growth, building strategic relationships with members and cultivating new ecosystem opportunities across the OpenPOWER community. Prior to her work with OpenPOWER, Redmond helped form the OpenDaylight Platform and has engaged with the Linux Foundation and Open Compute Project on various initiatives. + +“Over the last two years, the OpenPOWER Foundation and our 175 members have created an ecosystem that thrives on innovation. Working together on POWER’s open architecture has enabled us to customize across the stack and create new solutions that wouldn’t be possible without this level of cross-industry collaboration,” said Gordon MacKean, former Chair of and current Advisor to the Board, OpenPOWER Foundation, and current Senior Director, Hardware Platforms, Google. MacKean welcomed the new leadership stating, “With a strong passion and commitment to open technology, our new leaders will continue to guide the Foundation’s underlying mission, bringing the industry together to foster collaboration and develop advanced technology solutions that solve the evolving needs of today’s datacenter customers.” + +**A History of Success** During the OpenPOWER Foundation’s first two years, MacKean and McCredie played instrumental roles in forming the essential building blocks of the organization, growing the Foundation from five founding members to more than 170 members in more than 22 countries. Under their guidance, these members have worked collaboratively on hundreds of innovations and brought more than 25 solutions to market, including several from members TYAN, Suzhou PowerCore, IBM, Canonical, Redis Labs, Nallatech, Convey and Alpha-Data. + +Additionally, Foundation members have formed 11 Work Groups, including an Accelerator Work Group, which has taken IBM’s Coherent Accelerator Processor Interface (CAPI) technology and published an open specification so members can enable system designers and programmers to leverage CAPI on POWER. + +“The OpenPOWER Foundation has moved from rethinking the datacenter to revolutionizing it, embracing openness across both hardware and software,” said OpenPOWER Foundation Chair, John Zannos. “As the OpenPOWER Foundation enters its third year and next phase of evolution, we will continue to expand the ecosystem of hardware and software providers committed to openness and innovation on the POWER platform. 
There is enormous need and potential to change the datacenter and deliver unprecedented choice and value to the industry, and I look forward to helping drive this change in the coming year.” + +**Momentum Continues with 2016 OpenPOWER Summit** The OpenPOWER Foundation will host its 2016 OpenPOWER Summit: Revolutionizing the Datacenter at the San Jose Convention Center April 5-7. Open to the public, the event will feature keynote presentations from technology industry leaders, member presentations and an exhibitor pavilion. To register, exhibit, present or learn more, visit https://openpowerfoundation.org/openpower-summit-2016/. + +**About OpenPOWER Foundation** The OpenPOWER Foundation is a global, open development membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. OpenPOWER members share expertise, investment and server-class intellectual property to develop solutions that serve the evolving needs of technology customers. + +The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation. + +For further details visit [www.openpowerfoundation.org](https://openpowerfoundation.org/). + +\# # # Media Contact Abby Schoffman, Text100 212.871.3928 abby.schoffman@text100.com diff --git a/content/blog/openpower-foundation-provides-microwatt-for-fabrication-on-skywater-open-pdk-shuttle.md b/content/blog/openpower-foundation-provides-microwatt-for-fabrication-on-skywater-open-pdk-shuttle.md new file mode 100644 index 0000000..c938e64 --- /dev/null +++ b/content/blog/openpower-foundation-provides-microwatt-for-fabrication-on-skywater-open-pdk-shuttle.md @@ -0,0 +1,30 @@ +--- +title: "OpenPOWER Foundation Provides Microwatt for Fabrication on Skywater Open PDK Shuttle" +date: "2021-03-22" +categories: + - "blogs" +tags: + - "openpower" + - "google" + - "openpower-foundation" + - "microwatt" + - "james-kulina" + - "efabless" + - "skywater" + - "tim-ansell" + - "skywater-open-pdk-shuttle" +--- + +The OpenPOWER based Microwatt CPU core has been selected to be included in the [Efabless Open MPW Shuttle Program](https://www.efabless.com/open_shuttle_program). Microwatt’s inclusion in the program represents a lower barrier to entry for chip manufacturing. It also demonstrates the ability to create fully designed, fabricated chips relying on a complete, end-to-end open source environment - including open governance, specifications, tooling, IP, hardware, software and manufacturing. + +The Efabless Open MPW Shuttle Program provides fabrication for fully open-source projects using the [SkyWater Open Source PDK](https://www.skywatertechnology.com/press-releases/google-partners-with-skywater-and-efabless-to-enable-open-source-manufacturing-of-custom-asics/). It’s sponsored by Google and allows designers to experiment and test innovative designs with lower risk and fabrication costs. + +“Chip fabrication has essentially always been done in closed, proprietary environments, with incredibly prohibitive costs and risks associated with it,” said Tim Ansell, software engineer, Google. 
“SKY130 is the industry’s first open source foundry process design kit, and fabricating a completely open source processor like Microwatt showcases how much progress we’ve made in open source hardware.” + +Ansell helped develop a fully open source Process Design Kit (PDK) in partnership with SkyWater in 2020, and joined the [OpenPOWER Foundation Board of Directors](https://openpowerfoundation.org/about-us/board-of-directors/) in January 2021 to bring software development techniques and practices into the world of open source hardware. + +Microwatt is a small, simple CPU core written in VHDL 2008, designed as a proof of concept when the POWER ISA was open sourced at [OpenPOWER Summit North America 2019](https://openpowerfoundation.org/openpower-summit-north-america-2019-introducing-the-microwatt-fpga-soft-cpu-core/). Since then, it has grown to support MicroPython, Zephyr and Linux. + +“The OpenPOWER Foundation is thrilled to be participating in the first fully open sourced shuttle program,” said James Kulina, Executive Director, OpenPOWER Foundation. “This new open approach for chip fabrication has the potential to change the innovation curve for the semiconductor industry, providing easier access to trial and test new ideas rapidly at lower costs.” + +Stay tuned on our blog for more information on Microwatt and the SkyWater Open PDK Shuttle program. To discuss the project further, please join the [OpenPOWER Foundation Slack](https://join.slack.com/t/openpowerfoundation/shared_invite/zt-9l4fabj6-C55eMvBqAPTbzlDS1b7bzQ) workspace or reach out to [James Kulina](https://twitter.com/jameskulina) or [Tim Ansell](https://twitter.com/mithro) on Twitter. diff --git a/content/blog/openpower-foundation-reveals-new-servers-and-big-data-analytics-innovations.md b/content/blog/openpower-foundation-reveals-new-servers-and-big-data-analytics-innovations.md new file mode 100644 index 0000000..6761cb3 --- /dev/null +++ b/content/blog/openpower-foundation-reveals-new-servers-and-big-data-analytics-innovations.md @@ -0,0 +1,34 @@ +--- +title: "OpenPOWER Foundation Reveals New Servers and Big Data Analytics Innovations" +date: "2016-04-06" +categories: +  - "press-releases" +  - "blogs" +tags: +  - "featured" +--- + +### Foundation Membership Surpasses 200, Members Showcase More Than 50 Innovations at Summit + +**OPENPOWER SUMMIT, San Jose, Calif. – April 6, 2016 –** The [OpenPOWER Foundation](http://www.openpowerfoundation.org) today revealed more than 50 new infrastructure and software innovations, spanning the entire system stack, including systems, boards, cards and accelerators. Unveiled at the second annual [OpenPOWER Summit](https://openpowerfoundation.org/openpower-summit-2016/), these new products build upon 30 OpenPOWER-based solutions already in the marketplace. + +“To meet the demands of today’s data centers, businesses need open system design that provides greater flexibility and speed at a lower cost,” said Calista Redmond, President of the OpenPOWER Foundation and Director of OpenPOWER Global Alliances, IBM. “The innovations introduced today demonstrate OpenPOWER members’ commitment to building technology infrastructures that provide customers with more choice, allowing them to leverage increased data workloads and analytics to drive better business outcomes.” + +The OpenPOWER Foundation has rapidly grown to more than 200 businesses, organizations and individuals across 24 countries since it was formed two years ago. 
With the new innovations being announced today, the OpenPOWER Foundation continues to provide the technology and collaboration tools needed to deliver customized solutions and increased performance to customers, including hyperscale data centers and high performance computing organizations. OpenPOWER innovations are built upon by a growing community of more than 2,300 ISVs supporting Linux on POWER applications. + +The products revealed by OpenPOWER members today highlight: + +- **New Servers for High Performance Computing and Cloud Deployments** – Foundation members introduced more than 10 new OpenPOWER servers, offering expanded services for high performance computing and server virtualization. + - Rackspace has [announced](https://openpowerfoundation.org/press-releases/openpower-ecosystem-propels-open-innovation-in-data-center/) that “Barreleye” has moved from the lab to the data center. Rackspace anticipates “Barreleye” will move into broader availability throughout the rest of the year, with the first applications on the Rackspace Public Cloud powered by OpenStack. + - IBM, in collaboration with [NVIDIA](http://www.nvidia.com/content/global/global.php) and [Wistron](http://www.wistron.com/), plans to release its second-generation OpenPOWER high performance computing server, which includes support for the NVIDIA® Tesla® Accelerated Computing platform ([learn more](http://www.ibm.com/blogs/systems/ibm-power8-cpu-and-nvidia-pascal-gpu-speed-ahead-with-nvlink)). The server will leverage POWER8 processors connected directly to the new NVIDIA Tesla P100 GPU accelerators via the NVIDIA NVLink™ high-speed interconnect technology. Early systems will be available in Q4 2016. Additionally, IBM and NVIDIA plan to create global acceleration labs to help developers and ISVs port applications on the POWER8 and NVIDIA NVLink-based platform. + - With planned availability in April, the [TYAN GT75-BP012](http://www.tyan.com/1U_Sever_GT75-BP012.html) is a 1U, POWER8-based server solution with the ppc64 architecture. The OpenPOWER-based platform offers exceptional capability for in-memory computing in a 1U implementation. +- **Expanded use of CAPI for Acceleration Technology** – Foundation members, including Bittware, IBM, Mellanox and Xilinx, unveiled more than a dozen new accelerator solutions based on the Coherent Accelerator Processor Interface (CAPI). Alpha Data also unveiled a Xilinx FPGA-based CAPI hardware card at the Summit. These new accelerator technologies leverage CAPI to provide performance, cost and power benefits when compared to application programs running on a core or custom acceleration implementation attached via non-coherent interfaces. This is a key differentiator in building infrastructure to accelerate computation of big data and analytics workloads on the POWER architecture. +- **A Continued Commitment to Genomics Research** – Following successful collaborations with [LSU](http://www.lsu.edu/mediacenter/news/2015/07/30ored_openpower.php) and [tranSMART](https://openpowerfoundation.org/blogs/imperial-college-london-and-ibm-join-forces-to-accelerate-personalized-medicine-research-within-the-openpower-ecosystem/), OpenPOWER Foundation members continue to develop new advancements for genomics research. 
Today, [Edico Genome](http://www.edicogenome.com/) announced the DRAGEN Genomics Platform, a new appliance that enables ultra-rapid analysis of genomic data, reducing the time to analyze an entire genome from hours to just minutes, allowing healthcare providers to identify patients at higher risk for cancer before conditions worsen. DRAGEN’s unprecedented speed is being leveraged to rapidly diagnose critically ill newborns, improve turnaround time for prenatal tests, and quickly identify infectious disease outbreaks. Developed in collaboration with Xilinx and IBM, the solution features Edico’s [DRAGEN processor](http://www.edicogenome.com/dragen/), which is based on [Xilinx’s Virtex-7 980T FPGA](http://www.xilinx.com/products/silicon-devices/fpga.html), running on [IBM Power Systems S822LC](http://www-03.ibm.com/systems/power/hardware/s822lc-commercial/buy.html). The combination of the POWER CPU, high memory bandwidth and DRAGEN’s accelerated speed and high accuracy will allow clients, such as [Rady Children’s Institute for Genomic Medicine of San Diego](http://www.rchsd.org/research/genomics-institute/), to leverage advanced analytics in genomics and life sciences. + +Additional information on the innovations announced today can be found in the [OpenPOWER fact sheet](https://openpowerfoundation.org/wp-content/uploads/2016/04/HardwareRevealFlyerFinal.pdf). + +**About OpenPOWER Foundation** The OpenPOWER Foundation is a global, open development membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. OpenPOWER members share expertise, investment and server-class intellectual property to develop solutions that serve the evolving needs of technology customers. + +The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation. + +For further details visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org). diff --git a/content/blog/openpower-foundation-sc17-conference.md b/content/blog/openpower-foundation-sc17-conference.md new file mode 100644 index 0000000..d5eeb6c --- /dev/null +++ b/content/blog/openpower-foundation-sc17-conference.md @@ -0,0 +1,97 @@ +--- +title: "OpenPOWER Foundation Members to Present at SC17 Conference" +date: "2017-11-07" +categories: +  - "blogs" +tags: +  - "openpower" +  - "ibm" +  - "nvidia" +  - "mellanox" +  - "supercomputing" +  - "openpower-foundation" +  - "sc17" +  - "supercomputing-17" +  - "red-hat" +  - "oak-ridge" +  - "lawrence-livermore" +--- + +SC17 takes place November 12-17, 2017. Dedicated to showcasing work in high performance computing, networking, storage and analysis by the international HPC community, this year’s event is bound to be memorable. + +OpenPOWER Foundation members, including IBM, NVIDIA, Mellanox, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory and Red Hat, will be presenting a variety of talks, panels, research papers, tutorials, workshops, posters and Birds of a Feather sessions. Be sure to review the sessions below and attend as many as possible at SC17! 
+ +## **IBM** + +- [Application Porting and Optimization on GPU-Accelerated POWER Architectures](http://sc17.supercomputing.org/?post_type=page&p=5407&id=tut149&sess=sess232) +- [Charting the PMIxRoadmap](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof104&sess=sess308) +- [DOME Hot-Water Cooled MicroDataCenter](http://sc17.supercomputing.org/?post_type=page&p=5407&id=emt104&sess=sess195) +- [Eighth Annual Workshop for the Energy Efficient HPC Working Group (EE HPC WG)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp110&sess=sess144) +- [Joint International Workshop on Parallel Data Storage and Data Intensive Scalable Computing Systems (PDSW-DISCS)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp106&sess=sess109) +- [Making HPC Consumable: Helping Wet-Lab Chemists Access the Power of Computational Methods](http://sc17.supercomputing.org/?post_type=page&p=5407&id=imp107&sess=sess393) +- [OpenCAPI: High Performance, Host-Agnostic, Coherent Accelerator Interface](http://sc17.supercomputing.org/?post_type=page&p=5407&id=exforum116&sess=sess149) +- [P08: Performance Optimization of Matrix-free Finite-Element Algorithms within deal.II](http://sc17.supercomputing.org/?post_type=page&p=5407&id=post182&sess=sess293) +- [P58: Wharf: Sharing Docker Images across Hosts from a Distributed Filesystem](http://sc17.supercomputing.org/?post_type=page&p=5407&id=post231&sess=sess293) +- [P79: Porting the Opacity Client Library to a CPU-GPU Cluster Using OpenMP 4.5](http://sc17.supercomputing.org/?post_type=page&p=5407&id=post147&sess=sess293) +- [PowerAPI, GEOPM and Redfish: Open Interfaces for Power/Energy Measurement and Control](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof168&sess=sess355) +- [Topology-Aware GPU Scheduling for Learning Workloads in Cloud Environments](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pap251&sess=sess164) +- [Towards a Composable Computer System](http://sc17.supercomputing.org/?post_type=page&p=5407&id=emt101&sess=sess195) +- [Workshop on Education for High Performance Computing (EduHPC)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp154&sess=sess119) + +## **NVIDIA** + +- [Application Porting and Optimization on GPU-Accelerated POWER Architectures](http://sc17.supercomputing.org/?post_type=page&p=5407&id=tut149&sess=sess232) +- [How Serious Are We About the Convergence Between HPC and Big Data?](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pan128&sess=sess256) +- [Interactivity in Supercomputing](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof204&sess=sess327) +- [OpenACC API User Experience, Vendor Reaction, Relevance, and Roadmap](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof192&sess=sess336) +- [Scalable Parallel Programming Using OpenACC for Multicore, GPUs, and Manycore](http://sc17.supercomputing.org/?post_type=page&p=5407&id=tut135&sess=sess224) +- [Toward Standardized Near-Data Processing with Unrestricted Data Placement for GPUs](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pap567&sess=sess161) +- [Understanding Error Propagation in Deep Learning Neural Network (DNN) Accelerators and Applications](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pap565&sess=sess178) + +## **Mellanox** + +- [Accelerating Big Data Processing and Machine/Deep Learning Middleware on Modern HPC Clusters](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof175&sess=sess385) +- [Charting the PMIx 
Roadmap](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof104&sess=sess308) +- [Interconnect Your Future with Mellanox “Smart” Interconnect](http://sc17.supercomputing.org/?post_type=page&p=5407&id=exforum117&sess=sess153) +- [Why Is MPI So Slow? Analyzing the Fundamental Limits in Implementing MPI-3.1](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pap554&sess=sess163) + +## **Oak Ridge National Laboratory** + +- [2nd International Workshop on Post Moore's Era Supercomputing (PMES)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp109&sess=sess116) +- [8th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp115&sess=sess114) +- [Application Porting and Optimization on GPU-Accelerated POWER Architectures](http://sc17.supercomputing.org/?post_type=page&p=5407&id=tut149&sess=sess232) +- [Best Practices for Architecting Performance and Capacity in the Burst Buffer Era](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pan125&sess=sess255) +- [Eighth Annual Workshop for the Energy Efficient HPC Working Group (EE HPC WG)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp110&sess=sess144) +- [Exascale Challenges and Opportunities](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof172&sess=sess351) +- [Fourth SC Workshop on Best Practices for HPC Training](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp120&sess=sess134) +- [HPC Education: Meeting of the SIGHPC Education Chapter](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof225&sess=sess310) +- [Interactivity in Supercomputing](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof204&sess=sess327) +- [Machine Learning in HPC Environments](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp117&sess=sess112) +- [OpenACC API User Experience, Vendor Reaction, Relevance, and Roadmap](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof192&sess=sess336) +- [Post Moore Supercomputing](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pan114&sess=sess249) +- [Regression Testing and Monitoring Tools](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof176&sess=sess347) +- [Scalable HPC Visualization and Data Analysis Using VisIt](http://sc17.supercomputing.org/?post_type=page&p=5407&id=tut113&sess=sess208) +- [Scientific User Behavior and Data-Sharing Trends in a Petascale File System](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pap180&sess=sess169) +- [Software Engineering and Reuse in Computational Science and Engineering](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof144&sess=sess374) +- [Software Engineers: Careers in Research](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof149&sess=sess354) +- [The 2nd International Workshop on Data Reduction for Big Scientific Data (DRBSD-2)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp111&sess=sess123) +- [TOP500 - Past, Present, Future](http://sc17.supercomputing.org/?post_type=page&p=5407&id=inv114&sess=sess183) +- [Total Cost of Ownership and HPC System Procurement](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof224&sess=sess311) + +## **Lawrence Livermore National Laboratory** + +- [4th International Workshop on HPC User Support Tools (HUST-17)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp113&sess=sess131) +- [Eighth Annual Workshop for the Energy Efficient HPC Working Group (EE HPC 
WG)](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp110&sess=sess144) +- [Machine Learning in HPC Environments](http://sc17.supercomputing.org/?post_type=page&p=5407&id=wksp117&sess=sess112) +- [Modeling and Simulation of Communication in HPC Systems](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof194&sess=sess335) +- [P79: Porting the Opacity Client Library to a CPU-GPU Cluster Using OpenMP 4.5](http://sc17.supercomputing.org/?post_type=page&p=5407&id=post147&sess=sess293) +- [P82: Performance Evaluation of the NVIDIA Tesla P100: Our Directive-Based Partitioning and Pipelining vs. NVIDIA’s Unified Memory](http://sc17.supercomputing.org/?post_type=page&p=5407&id=post203&sess=sess293) +- [P94: Fully Hierarchical Scheduling: Paving the Way to Exascale Workloads](http://sc17.supercomputing.org/?post_type=page&p=5407&id=post154&sess=sess293) +- [Performance Modeling under Resource Constraints Using Deep Transfer Learning](http://sc17.supercomputing.org/?post_type=page&p=5407&id=pap605&sess=sess167) +- [Power-Aware High Performance Computing: Challenges and Opportunities for Application and System Developers](http://sc17.supercomputing.org/?post_type=page&p=5407&id=tut173&sess=sess242) +- [The Green500: Trends in Energy-Efficient Supercomputing](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof173&sess=sess349) +- [Using HPC to Impact US Manufacturing through the HPC4Mfg Program](http://sc17.supercomputing.org/?post_type=page&p=5407&id=imp106&sess=sess279) + +## **Red Hat** + +- [Ceph Applications in HPC Environments](http://sc17.supercomputing.org/?post_type=page&p=5407&id=bof121&sess=sess348) diff --git a/content/blog/openpower-foundation-technology-leaders-unveil-hardware-solutions-to-deliver-new-server-alternatives.md b/content/blog/openpower-foundation-technology-leaders-unveil-hardware-solutions-to-deliver-new-server-alternatives.md new file mode 100644 index 0000000..7a6f9d6 --- /dev/null +++ b/content/blog/openpower-foundation-technology-leaders-unveil-hardware-solutions-to-deliver-new-server-alternatives.md @@ -0,0 +1,91 @@ +--- +title: "OpenPOWER Foundation Technology Leaders Unveil Hardware Solutions To Deliver New Server Alternatives" +date: "2015-03-18" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +_Technology Movement Backed by Google, IBM, NVIDIA, Mellanox and Tyan to Transform Data Center with World’s First Open Server Architecture_ Rapidly Expanding Ecosystem Fueled by More than 100 Members Worldwide _Working on More than 100 Innovations_ + +_POWER8 Processors Offer Nearly 60% Better Price-Performance than Alternative Chips_ + +**OPENPOWER SUMMIT, San Jose, Calif.** **– March 18, 2015 –** The [OpenPOWER Foundation](http://www.openpowerfoundation.org/) today announced more than ten hardware solutions **–** spanning systems, boards, and cards, and a new microprocessor customized for China. Built collaboratively by [OpenPOWER members](https://openpowerfoundation.org/membership/current-members/), the new solutions exploit the POWER architecture to provide more choice, customization and performance to customers, including hyperscale data centers. + +The OpenPOWER Foundation which is a collaboration of technologists encouraging the adoption of an open server architecture for computer data centers has grown to more than 110 businesses, organizations and individuals across 22 countries. 
IBM’s POWER architecture is the cornerstone of innovation for the OpenPOWER Foundation, creating a computing platform available to all. + +Members and customers recognize the technical benefits of the POWER architecture. The POWER8 microprocessor is the first processor designed from the ground up for Big Data and analytics workloads. With the best of breed alternative chip estimated to be priced 50% higher (1), the POWER8 processor utilized by OpenPOWER members and others can enable the design of systems that deliver better performance (2) **–** projected at nearly 60% (3) better performance per dollar spent on processors. + +“Since our first public event just under one year ago, the OpenPOWER Foundation has expanded dramatically and enabled the development of a new breed of data center technology products worldwide,” said Gordon MacKean, OpenPOWER Foundation Chair. “Through our members’ individual and collective efforts we are positively changing the game, delivering innovations that advance data center technology, expand choice and drive market efficiency.” + +Among the products and prototypes OpenPOWER members revealed today are: + +- **Prototype of IBM’s first OpenPOWER high performance computing server on the path to exascale** – IBM and Wistron are jointly developing the first OpenPOWER-based high performance computing server using technology from NVIDIA and Mellanox. The system will be the debut offering in a series of solutions to be introduced as part of IBM's OpenPOWER technical computing roadmap, which includes IBM’s future delivery of two systems to Lawrence Livermore and Oak Ridge National Laboratories. The systems are predicted to be five to 10 times faster than today’s leading supercomputers. +- **First commercially available OpenPOWER server, the TYAN TN71-BP012** – With planned availability in the second quarter of 2015, the [TYAN TN71-BP012](http://www.tyan.com/solutions/tyan_openpower_system.html) servers are designed for large-scale cloud deployments and follow Tyan’s highly successful OpenPOWER [customer reference system](http://www.tyan.com/newsroom_pressroom_detail.aspx?id=1648) introduced in October 2014. IBM will be among the first to deploy the new servers as part of its SoftLayer infrastructure, utilizing them for a [new bare metal service](http://www-03.ibm.com/press/us/en/pressrelease/46238.wss) offering. +- **First GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950** – The [Cirrascale RM4950](http://www.cirrascale.com/products_rackmount_RM4950.aspx) is the result of collaboration between NVIDIA, Tyan and one of the OpenPOWER Foundation’s newest members, Cirrascale. Immediately available for order and shipping in volume in the second quarter of 2015, the platform supports the development of GPU-accelerated big data analytics, deep learning, and scientific computing applications. +- **Open server specification and motherboard mock-up combining OpenPOWER, Open Compute and OpenStack** – Rackspace, a managed cloud company, revealed an [open server design](http://www.rackspace.com/blog/openpower-opening-the-stack-all-the-way-down/) and prototype motherboard, combining OpenPOWER and Open Compute design concepts. The new design, targeted to run OpenStack services and be deployed in Rackspace data centers, will draw upon a wide range of open innovations to deliver users improved performance, value, and features. 
+ +Other member-developed solutions revealed today leverage the Coherent Accelerator Processor Interface (CAPI), a unique feature built into the POWER architecture. CAPI provides members and other technology companies the ability to build solutions right on top of the POWER architecture. New CAPI-based solutions include the [ConnectX-4 adapter card](http://www.mellanox.com/page/products_dyn?product_family=201&mtag=connectx_4_vpi_card) by Mellanox, Convey’s CAPI developer kit leveraging Xilinx FPGA-based co-processors, and shared virtual memory between a Stratix V FPGA accelerator and a POWER8 CPU developed by Altera and IBM. These OpenPOWER CAPI-based solutions join Nallatech’s [OpenPOWER CAPI Developer Kit](http://www.nallatech.com/nallatech-collaborates-with-openpower-foundation-members-ibm-and-altera-to-launch-innovative-capi-fpga-accelerator-platform/), developed in collaboration with Altera and IBM and released in November 2014. + +**The Power of OpenPOWER in China** + +OpenPOWER Foundation members also revealed products under development in China, where the OpenPOWER ecosystem is providing Chinese technology companies the option to build custom solutions and accelerate local innovation. + +At the center of China’s emerging OpenPOWER-based ecosystem is CP1, the first POWER chip for the China market, from a Chinese chip design company named PowerCore. The first China OpenPOWER system with CP1 will come to market this year. CP1 will be utilized by Zoom Netcom for a new line of servers called RedPower, the first China OpenPOWER two-socket system coming to market in 2015. Additional Chinese OpenPOWER members, including ChuangHe, shared designs for China-branded OpenPOWER systems incorporating POWER8 processors, with planned availability in 2015. + +These announcements follow an endorsement of OpenPOWER in the fall of 2014 by the Chinese government through the formation of the China POWER Technology Alliance (CPTA), a public-private partnership. To drive innovation and opportunity for China-based companies, CPTA’s primary mission is to promote the upgrading of China’s industrial structure by integrating local Chinese resources with the OpenPOWER ecosystem under the guidance of the Chinese government. Through international cooperation around POWER technology, CPTA will create world-leading technology solutions that leverage the latest Big Data and cloud computing capabilities and apply these outcomes to banking, telecommunications, energy, transportation, internet and Smarter City technology initiatives in China. + +**Cross-Community Collaboration Drives More Open Solutions** + +The OpenPOWER Foundation also announced the formation of the OpenPOWER Advisory Group, a formal mechanism for engaging with other open development organizations. Inaugural members of the Advisory Group represent the Linux Foundation, the Open Compute Project and the China POWER Technology Alliance (CPTA). The Advisory Group will provide guidance to the OpenPOWER Board of Directors and serve as a forum for support and collaboration between communities with open approaches to infrastructure and software development. + +**About the OpenPOWER Foundation** + +The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers. 
+ +- OpenPOWER enables collaborative innovation for shared building blocks +- OpenPOWER supports independent innovation by members +- OpenPOWER builds on industry leading technology +- OpenPOWER thrives as an open development community + +For further details, a full membership roster, and guidance on getting involved in the OpenPOWER Foundation, visit [www.openpowerfoundation.org](https://openpowerfoundation.org/). + +**Footnotes** + +\[1\] Pricing is based on estimates from the Linley Group December 29, 2014 report POWER8 Hits the Merchant Market which states "Pricing is no contest. We estimate that IBM’s 12-core Power8 will list for $2,500; add $180 or $360 for two or four buffer chips. Intel hasn’t published a list price for the Xeon E5-2699v3, but after surveying some Internet re-sellers, we estimate it lists for about $4,100." More information can be found at: [http://www.linleygroup.com/newsletters/newsletter\_detail.php?num=5275](http://www.linleygroup.com/newsletters/newsletter_detail.php?num=5275) + +\[2\] Performance is based on SPEC CPU2006 publishes and projections based on performance results as of February 28, 2015. SPEC® and the benchmark name SPECCPU® are registered trademarks of the Standard Performance Evaluation Corporation. For more information about SPECCPU2006, see [https://www.spec.org/cpu2006/](https://www.spec.org/cpu2006/) + +OpenPOWER performance is estimated from an IBM Power System S824 published result with 24 cores / 192 threads, POWER8; 3.5GHz, 512 GB memory, RHEL 7.1 and extrapolated to a single socket OpenPOWER based 12 cores / 96 threads POWER8; 3.1 GHz. + +Competitive performance result is published on NEC Corporation Express5800/R120f-1M (Intel Xeon E5-2699 v3) with (per socket) 18 cores / 36 threads; Intel E5-2699 v3; 2.3 GHz; 128 GB, RHEL 6.5. + +\[3\] Price-performance is derived from the pricing in \[1\] and performance in \[2\]. + +\# # # + +**Supporting Quotes** + +"Collaborating across our open development communities will accelerate and broaden the raw potential of a fully open data center. We have a running start together and look forward to technical collaboration and events to engage our broader community." **– Corey Bell, CEO Open Compute Project** + +“By leveraging the CAPI technology designed for the IBM POWER8 servers and support from the OpenPOWER Foundation team, it allowed Algo-Logic Systems to develop world class ultra-low-latency Full Order-Book in FPGA logic that will benefit the Financial Services industry.” **– John Lockwood, CEO Algo-Logic** + +“The future hardware architecture of OpenPOWER which shall include the next generation POWER processors, ultra high memory density systems, and NVIDIA’s NVLink interconnect system, will provide the hardware platform for GPUdb that will unleash a massive performance improvement in every facet of operation. This will create even more performance improvements in the ability to ingest and conduct on the fly analytics on ultra-high velocity big data feeds.” **– Amit Vij, CEO, GPUdb** + +"OpenPOWER started off as an idea that immediately resonated with our technology partners to strengthen their scale out implementations like analytics. Now, OpenPOWER is fundamental to every conversation IBM is having with clients -- from HPC to scale out computing to cloud service providers. Choice, freedom and better performance are strategic imperatives guiding customers around the globe, and OpenPOWER is leading the way." 
**– Ken King, General Manager OpenPOWER Alliances, IBM** + +“We expect OpenPOWER to broaden the scope of available supercomputing solutions and products which is crucial for us as a leading provider of supercomputing resources. Integrating POWER processor technologies and high-performance GPUs opens an exciting path towards power-efficient exascale computing.” **– Dr. Dirk Pleiter, Julich Supercomputing Center** + +"The Linux Foundation's mission includes supporting open and collaborative development to advance key technologies and transform markets. OpenPOWER is already resonating across many dimensions and stakeholders for both hardware and software and the time has never been more right for cross-collaboration among these communities.” **– Mike Dolan, Sr. Director of Strategic Programs, The Linux Foundation** + +"The prototype of IBM's system revealed today is the first in a series of new high-density Tesla GPU-accelerated servers for OpenPOWER high-performance computing and data analytics. IBM plans to build upon this offering with follow-on systems, adding future-generation ‘Pascal’ GPUs with the NVIDIA NVLink high-speed GPU interconnect technology to help set the stage for exascale computing." **– Sumit Gupta, General Manager of Accelerated Computing, NVIDIA** + +“China POWER Technology Alliances (CPTA) was established in order to accelerate the speed of China secured and trusted IT industry chain building, by leveraging OpenPOWER technology. CPTA joining the Advisory Group of OpenPOWER will be a significant milestone for engaging China into the global POWER ecosystem, and opening the development community to drive further POWER innovations through the deep collaboration between communities.” **– Mr. Zhu Ya Dong, Chairman of Suzhou PowerCore** + +“As a new open platform, OpenPOWER provides the prototype system that conforms to the specifications in order to satisfy the demands of the development of the OpenPOWER ecosystem in China.” **– Mr. Zhiqiang Tian, Senior Engineer, BIOS research and development, TEAMSUN** + +“The development of the OpenPOWER ecosystem in China’s high security level market enriches China ISV and IHV’s options for a total solution from hardware to software.” **– Mr. Zhiqiang Tian, Senior Engineer, BIOS research and development, TEAMSUN** + +**Media Contact:** Grace Pai-Leonard Text100 Public Relations Email: [grace.pai@text100.com](mailto:grace.pai@text100.com) Phone: 212-871-5194 diff --git a/content/blog/openpower-foundation-unveils-first-innovations-and-roadmap.md b/content/blog/openpower-foundation-unveils-first-innovations-and-roadmap.md new file mode 100644 index 0000000..3c5f081 --- /dev/null +++ b/content/blog/openpower-foundation-unveils-first-innovations-and-roadmap.md @@ -0,0 +1,32 @@ +--- +title: "OpenPOWER Foundation Unveils First Innovations and Roadmap" +date: "2014-04-23" +categories: + - "press-releases" + - "blogs" +tags: + - "openpower" + - "samsung" + - "ibm" + - "google" + - "power8" + - "mellanox" + - "press-release" +--- + +San Francisco, CA – Open Innovation Summit – 23 April 2014 – The [OpenPOWER Foundation](https://openpowerfoundation.org/), an open development community dedicated to accelerating data center innovation, today took its first steps to deliver transformative system designs based on [IBM’s new POWER8 processor](http://www-03.ibm.com/press/us/en/pressrelease/43702.wss). 
At the Open Innovation Summit today, with over 100 leading industry executives and technologists on hand, the Foundation showed the first reference board and OEM systems, and innovations including many forms of acceleration, advanced memory and networking. OpenPOWER has grown to more than two dozen members, including global hardware and software thought leaders. Formed by Google, IBM, Mellanox Technologies, NVIDIA, and Tyan, the Foundation makes POWER hardware and software available for open development, as well as POWER intellectual property licensable to other manufacturers. OpenPOWER is greatly expanding the ecosystem of innovators providing value back to the industry and end users. “We are very pleased with the growth of the OpenPOWER community and the progress made by the Working Group members even at this early stage,” said Gordon MacKean, Chairman, OpenPOWER Foundation. “The projects feeding the innovation pipeline to date will greatly enhance the performance of the next generation of servers by eliminating system-level bottlenecks.” **Initial OpenPOWER Designs** At the summit, the OpenPOWER Foundation presented its first white box server details, including a development and reference design from Tyan, and firmware and operating system developed by IBM, Google, and Canonical. The OpenPOWER software stack in this white box design is targeted for ease of implementation in hybrid deployments. IBM noted it will be deploying systems leveraging this OpenPOWER hardware and software stack in SoftLayer later this year. Information on OpenPOWER projects is available on the Foundation’s new web site, [www.openpowerfoundation.org](https://openpowerfoundation.org/). **Example Innovative Solutions** OpenPOWER also announced new ways to use POWER-based technologies to address critical big data, cloud, and application challenges facing modern data centers. An early live demonstration of these innovations will be performed at the IBM Impact 2014 Global Conference, Las Vegas, Nevada, April 27 – May 1. These include: + +- **Mellanox RDMA exploitation on POWER** – Using RDMA, a 10X throughput and latency improvement for Key Value Store applications was described. These capabilities will be further accelerated with future exploitation of POWER8 capabilities. +- **NVIDIA GPU Accelerators** – NVIDIA is adding CUDA software support for NVIDIA GPUs with IBM POWER CPUs. IBM and NVIDIA are demonstrating the first GPU accelerator framework for Java, showing an order of magnitude performance improvement on Hadoop Analytics applications compared to a CPU-only implementation. NVIDIA will offer its NVLink™ high-speed GPU interconnect as a licensed technology to OpenPOWER Foundation members. +- **Xilinx FPGA accelerator with CAPI attach** – IBM described a memcached Key Value Store showing a 35X power/performance improvement with an order of magnitude latency reduction. +- **Altera FPGA accelerator with CAPI attach** – IBM described a Monte Carlo financial instruments model with a 200X speedup. +- **Micron, Samsung Electronics, and SK Hynix memory** – Each of these innovative memory companies is committed to supporting the OpenPOWER Foundation through the supply of memory and storage components for an open ecosystem. + +**New OpenPOWER Foundation Members** Twenty-five members have joined OpenPOWER including Canonical, Samsung Electronics, Micron, Hitachi, Emulex, Fusion-IO, SK Hynix, Xilinx, Jülich Supercomputer Center, Oregon State University, and several others since OpenPOWER formed as a legal entity in December 2013. 
**About OpenPOWER Foundation** The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers. + +- OpenPOWER enables collaborative innovation for shared building blocks +- OpenPOWER supports independent innovation by members +- OpenPOWER builds on industry leading technology +- OpenPOWER thrives as an open development community + +For further details, a full membership roster, and getting involved in the OpenPOWER Foundation, visit [www.openpowerfoundation.org](https://openpowerfoundation.org/). Contact: Calista Redmond Director, Business Development OpenPOWER Foundation Email membership@open-power.org Phone 720-396-4384 [Download Press Release](https://openpowerfoundation.org/wp-content/uploads/2014/04/OpenPOWER-April-23-press-release-5-pm-4-22-14.pdf) diff --git a/content/blog/openpower-guide-to-beyond-sc15-fun-in-austin.md b/content/blog/openpower-guide-to-beyond-sc15-fun-in-austin.md new file mode 100644 index 0000000..fd48ad1 --- /dev/null +++ b/content/blog/openpower-guide-to-beyond-sc15-fun-in-austin.md @@ -0,0 +1,10 @@ +--- +title: "OpenPOWER's Guide to Beyond SC15: Fun in Austin" +date: "2015-11-14" +categories: + - "blogs" +tags: + - "sc15" +--- + +[![ThingsToDo_Austin_Infographic](images/ThingsToDo_Austin_Infographic-347x1024.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/11/ThingsToDo_Austin_Infographic.pdf) diff --git a/content/blog/openpower-guide-to-sc15-exhibitor-floor-map.md b/content/blog/openpower-guide-to-sc15-exhibitor-floor-map.md new file mode 100644 index 0000000..d8d084e --- /dev/null +++ b/content/blog/openpower-guide-to-sc15-exhibitor-floor-map.md @@ -0,0 +1,10 @@ +--- +title: "OpenPOWER's Guide to SC15: Exhibitor Floor Map" +date: "2015-11-14" +categories: + - "blogs" +tags: + - "sc15" +--- + +[![OpenPower_SC15_FloorMap-01](images/OpenPower_SC15_FloorMap-01-1024x663.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/11/OpenPower_SC15_FloorMap.pdf) diff --git a/content/blog/openpower-hardware-shines-open-source-summit.md b/content/blog/openpower-hardware-shines-open-source-summit.md new file mode 100644 index 0000000..8e1a3b9 --- /dev/null +++ b/content/blog/openpower-hardware-shines-open-source-summit.md @@ -0,0 +1,34 @@ +--- +title: "OpenPOWER Open Hardware Shines at Open Source Summit North America" +date: "2018-09-06" +categories: + - "blogs" +tags: + - "featured" +--- + +By: Hugh Blemings, Executive Director, OpenPOWER Foundation + +\[caption id="attachment\_5637" align="alignleft" width="150"\][![](images/Hugh-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2018/08/Hugh.jpg) "The OpenPOWER ecosystem includes the fastest and most open production systems available today - from workstation to hyperscale." - Hugh Blemings, Executive Director, OpenPOWER Foundation\[/caption\] + +Last week I was fortunate enough to attend the [Linux Foundation's](https://www.linuxfoundation.org/) [Open Source Summit North America](https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/) (OSS/NA) in Vancouver, Canada representing the OpenPOWER Foundation. The LF of course needs little introduction to this audience, but their OSS events really are a shining example of the power of Open communities and open collaboration. 
+ +In the OpenPOWER Foundation booth, we arranged to have hardware from one of our members to show off to attendees – [the Talos II workstation from Raptor Engineering.](https://www.raptorcs.com/) + +And what a workstation it is. As we received it, it was running the latest release of Debian, [TDE](https://trinitydesktop.org/) desktop – all the usual fantastic tools you'd expect on a modern Linux Desktop. Of course the hardware runs just as sweet with Red Hat Linux/Fedora or SuSE/OpenSuSE. + +It's a nice fast machine, too. Two socket quad core Power9 goodness (32 Threads, woo!), Gen4 PCIe and _lots_ of DDR4 RAM channels. You barely hear the machine when running due to its nice, thermally cool design. Turns out you can spec a machine up to 22 cores/socket for a monster 176 threads if you want, and it still won’t heat up your room much. + +(For the record, all this computing power did little to improve my gaming abilities when I tried [Xonotic](https://www.xonotic.org/)…) + +https://twitter.com/hughhalf/status/1035568611174305792 + +Performance aside, there was one consistent theme in the majority of conversations we had with conference attendees that really resonated with folks – the openness of OpenPOWER systems. The Talos II as configured was running entirely libre software – bootloader/firmware, [OpenBMC](https://openbmc.org/) as well as the OS itself of course. Literally no executable binary blobs on the machine. In fact, when Raptor ships them you not only get source code for the software, you get schematics for the system too. + +We’re very fortunate in the OpenPOWER ecosystem to have systems that demonstrate both the openness and breadth of what OpenPOWER represents – from workstation all the way to hyperscale. And while we did not have a [Google/Rackspace Zaius/Barreleye hyperscale server](https://openpowerfoundation.org/blogs/openpowerchat-zaius-barreleye-g2/) at the conference, it was very much there in spirit as an entirely open design (the hardware design is shared through the [Open Compute Project](https://www.opencompute.org/)). + +This, we believe, makes OpenPOWER systems the fastest and most open production systems available today. No funny little black-box management engines running on the CPU either... + +If you'd like to hear more about what's happening across the OpenPOWER ecosystem, our [OpenPOWER Summit Europe](https://openpowerfoundation.org/summit-2018-10-eu/) takes place next month in Amsterdam. With a theme of “Open the Future,” it will feature a bunch of technical sessions, an exhibition of solutions from our members and some industry-changing announcements too!  + +P.S. Oh and the two days before our event, check out the [Open Compute Summit](https://www.opencompute.org/summit/regional-summit-2018) in the same conference centre diff --git a/content/blog/openpower-host-os-repository-launches-on-github.md new file mode 100644 index 0000000..414afb4 --- /dev/null +++ b/content/blog/openpower-host-os-repository-launches-on-github.md @@ -0,0 +1,22 @@ +--- +title: "OpenPOWER Host OS Repository Launches on GitHub!" +date: "2016-07-22" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Ricardo Marin Matinata, Linux Architect, KVM and Cloud on POWER, IBM_ + +The initial version 0.5 (beta) of the OpenPOWER HostOS repository is available! 
+ +As new OpenPOWER hardware features and servers are developed by multiple partners, it becomes a challenge to deploy them in an OS environment that leverages a known and tried base and, at the same time, allows for the flexibility that is required to support the diversity of requirements. To address this challenge, IBM is launching a new collaboration model: an open community for OpenPOWER hardware enablement and features that is built on top of a reference Host OS/KVM for the Power architecture. + +Through this community, IBM and OpenPOWER are providing an open source repository that is seeded with the core elements, allowing OpenPOWER partners to build and validate their own deliverables. This repository includes the core kernel as well as other key component pieces to enable KVM virtualization, along with build scripts and a validation suite.  These components enable members of the OpenPOWER ecosystem to build their own Host OS with the optional support of KVM on Power and, most importantly, allow them to contribute back to this community. The repository also provides an additional usage model: an abstraction layer based on KVM virtualization. This option allows OpenPOWER partners to deploy new hardware features and servers while maintaining a stable environment for guest operating systems. + +While IBM remains committed to each respective upstream community, this new community will help all to advance the OpenPOWER ecosystem and ensure some feature consistency. Stay tuned for version 1.0, which will bring additional stability and more Linux enablement for OpenPOWER innovations, such as new processor features, as well as advancements on virtualization technology. + +## To get started, more information is available at the OpenPOWER HostOS GitHub portal: [https://github.com/open-power-host-os/builds](https://github.com/open-power-host-os/builds) + +## The full collection of components can be found here: [https://github.com/orgs/open-power-host-os](https://github.com/orgs/open-power-host-os) diff --git a/content/blog/openpower-in-2020-a-year-in-review.md new file mode 100644 index 0000000..aff1a19 --- /dev/null +++ b/content/blog/openpower-in-2020-a-year-in-review.md @@ -0,0 +1,62 @@ +--- +title: "OpenPOWER in 2020: a year in review" +date: "2020-12-21" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "oak-ridge-national-laboratory" + - "power-isa" + - "covid-19" + - "coronavirus" + - "james-kulina" + - "lawrence-livermore-national-laboratory" +--- + +**James Kulina, executive director, OpenPOWER Foundation** + +Can you believe it’s already the end of the year, as we get ready to close out 2020 and wave it goodbye? It’s certainly hard for me to believe that it’s only been six months since I [joined the OpenPOWER Foundation](https://openpowerfoundation.org/openpower-foundation-executive-director-seeks-to-accelerate-ecosystem-growth/) as executive director. + +It has been a one-of-a-kind year for all of us - and yet, despite it all we’ve managed to accomplish great things as an open source community. The title of Linux Foundation’s annual report summarizes it nicely: [advancing open collaboration amid the challenges of a lifetime](https://www.linuxfoundation.org/blog/2020/12/download-the-2020-linux-foundation-annual-report/). + +When I reflect back on 2020, there are three things that stick out to me. 
I’ll remember 2020 as the year that POWER technology was used for the most noble purposes, the year of the community coming together, and the year in which we made strides to develop a fully open ecosystem surrounding the POWER ISA. + +## POWER used for good + +I was inspired to see all the different ways POWER technology contributed to a healthier world. When the coronavirus spread across the world, [OpenPOWER members jumped into action](https://openpowerfoundation.org/openpower-foundation-members-help-combat-covid-19/). + +- Scientists from Lawrence Livermore National Laboratory and Oak Ridge National Laboratory worked on research to help understand the outbreak and develop treatments. +- Members such as CINECA, Barcelona Supercomputing Center and Jülich Supercomputing Centre participated in the Exscalate4CoV program to help study the coronavirus and identify solutions to more quickly address pandemic situations. +- IBM helped to launch the COVID-19 High Performance Computing Consortium with the White House Office of Science and Technology Policy and the U.S. Department of Energy. +- Other members, like Nimbix and NVIDIA, provided complementary resources to others working at the forefront of coronavirus research efforts. + +It shouldn’t come as a surprise that OpenPOWER members and technology are vital to important research efforts - it’s quite common. In fact, just this year [we shared details on research projects](https://openpowerfoundation.org/openpower-foundation-members-help-combat-covid-19/) that took place at Oak Ridge National Laboratory in areas like nuclear waste remediation, fusion energy, climate change, cancer research and pharmacology. + +I can’t wait to see how OpenPOWER members help solve new challenges in 2021. + +## A fully open ecosystem + +In 2020, we continued to develop a fully open source ecosystem and built on top of the recently [open sourced POWER ISA](https://newsroom.ibm.com/2019-08-21-IBM-Demonstrates-Commitment-to-Open-Hardware-Movement). New contributions made this year included: + +- [A2I POWER processor core](https://openpowerfoundation.org/a2i-power-processor-core-contributed-to-openpower-community-to-advance-open-hardware-collaboration/), a multi-threaded core designed for high streaming throughput +- [A2O POWER processor core](https://openpowerfoundation.org/openpower-foundation-introduces-ibm-hardware-and-software-contributions-at-openpower-summit-2020/), for enhanced single-thread performance +- Open Cognitive Environment (Open-CE), based on IBM’s PowerAI to improve consumability of AI and deep learning frameworks + +In addition to these open source contributions, other advances were made this year to help OpenPOWER developers and members create new technologies on Power. Antmicro joined the OpenPOWER Foundation this year, and announced support for the POWER ISA in Renode, its multi-architecture simulator for software and hardware co-development. + +[Antmicro’s Michael Gielda shared with us](https://openpowerfoundation.org/welcome-antmicro-to-the-openpower-foundation/) that, “when the POWER ISA became open source, given our strong belief in a vendor-neutral, multi-solution ecosystem that is needed to make open hardware a reality, it was only a matter of time for us to join OpenPOWER.” + +Enabling developers to test applications based on the POWER ISA was an important step in growing the OpenPOWER footprint, so we’re thrilled to have Antmicro’s support and collaboration. 
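To make the "open ISA" point a little more concrete: because the Power ISA encodings are now publicly documented, anyone can write tooling against them without a license agreement. The snippet below is a small illustrative sketch (not part of any OpenPOWER project) that packs a D-form `addi` instruction from its published field layout; the field positions and the primary opcode 14 come from the Power ISA specification, and the resulting words match what a standard assembler emits for `li r3,1` and `addi r4,r3,16`.

```c
#include <stdint.h>
#include <stdio.h>

/* D-form layout (IBM bit numbering): | opcode 0:5 | RT 6:10 | RA 11:15 | SI 16:31 |
 * addi uses primary opcode 14; "li rD,value" is shorthand for "addi rD,0,value". */
static uint32_t encode_addi(unsigned rt, unsigned ra, int16_t si)
{
    return (14u << 26) | ((rt & 0x1fu) << 21) | ((ra & 0x1fu) << 16) | (uint16_t)si;
}

int main(void)
{
    printf("addi r3,r0,1  -> 0x%08x\n", encode_addi(3, 0, 1));  /* 0x38600001, i.e. li r3,1 */
    printf("addi r4,r3,16 -> 0x%08x\n", encode_addi(4, 3, 16)); /* 0x38830010 */
    return 0;
}
```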
+ +## Community momentum + +Despite a challenging environment this year, our community found new ways to connect and collaborate. This year’s OpenPOWER Summit was a virtual event for the first time, and it was the most highly attended event we’ve held to date. + +We also launched our OpenPOWER Foundation Slack channel, which has been instrumental in allowing organic collaboration and discussion between members of our community. Anyone interested in learning more about OpenPOWER can [quickly and easily join the channel.](https://join.slack.com/t/openpowerfoundation/shared_invite/zt-9l4fabj6-C55eMvBqAPTbzlDS1b7bzQ) + +Last but not least, we kicked off important collaborations with other Linux Foundation projects this year. Bridging the gap between various Linux Foundation communities across AI, Cloud Native, Edge, Networking and more will help us develop new applications for POWER technology. + +As 2020 comes to a close and we begin looking forward to the year ahead, I’m excited to continue building momentum for the OpenPOWER ecosystem. One of my biggest priorities for 2021 is making OpenPOWER more accessible to a wider audience. We have a number of initiatives in the pipeline that will help us achieve that goal - so stay tuned. + +What about you - what stands out to you most about open source hardware and software from 2020? And what’s your biggest priority heading into the new year? Let me know in the comments below! diff --git a/content/blog/openpower-isa-compliance-definition.md new file mode 100644 index 0000000..8a43b1b --- /dev/null +++ b/content/blog/openpower-isa-compliance-definition.md @@ -0,0 +1,30 @@ +--- +title: "OpenPOWER ISA Compliance Definition" +date: "2020-02-24" +categories: + - "blogs" +tags: + - "ibm" + - "power-isa" + - "openpower-isa" + - "compliance-work-group" +--- + +By: Sandy Woodward, OpenPOWER Foundation Compliance Work Group Chair, IBM Academy of Technology Member + +There is much excitement around the August 2019 announcement of the open-sourcing of the POWER Instruction Set Architecture (ISA), which provides the opportunity for experimentation and collaboration. When a hardware implementation of the ISA matures into a product, the OpenPOWER ISA Compliance Definition comes into play to ensure that software shown to execute properly on one compliant processor implementation will execute properly on a different, also compliant, processor implementation. How will the Compliance Work Group handle ISA compliance with the POWER ISA being open-sourced? Let me start with two OpenPOWER ISA Compliance specifications that are already available, to share OpenPOWER compliance concepts before delving into that question. + +1. [The OpenPOWER ISA Compliance Definition, Revision 1.0](https://openpowerfoundation.org/?resource_lib=openpower-isa-compliance-definition) defines the test suite requirements to demonstrate OpenPOWER ISA Profile compliance for POWER8 systems and is based on the [IBM POWER ISA Version 2.07 B](https://openpowerfoundation.org/?resource_lib=ibm-power-isa-version-2-07-b). +2. [The OpenPOWER ISA Compliance Definition, Revision 2.0](https://openpowerfoundation.org/?resource_lib=openpower-isa-compliance-definition_review-draft) provides the test suite requirements to be able to demonstrate OpenPOWER ISA Profile compliance for POWER9 systems and is based on the [IBM POWER ISA Version 3.0 B](https://openpowerfoundation.org/?resource_lib=power-isa-version-3-0). 
+ +In both of these documents, the testing of a processor implementation's compliance is not intended to show that the processor implementation under test is robust under all possible operating conditions, inputs, or event time interactions. It is intended to show that the processor implementation under test implemented the ISA as specified and that the specification was interpreted by the processor developers as intended by the specification authors. + +The methodology for architectural compliance testing described in these two documents is based on scenarios. Each scenario describes a set of tests that should be performed, and the successful execution of all of these tests is necessary for complete compliance testing. There are two general categories of scenarios: instruction-driven scenarios and mechanism-driven scenarios. + +Instruction-driven scenarios define tests that execute a single instruction and are based on definitions of instruction behavior in the architecture specification. Each scenario deals with an aspect of the instruction behavior, such as setting a specific register field. For each instruction, executing all of its related scenarios is necessary for fully testing the interpretation of the instruction description. + +Mechanism-driven scenarios require execution of a sequence of one or more instructions and are based on definitions of mechanisms in the architecture specification, which may involve interactions between several instructions or architectural resources. Each scenario describes a different sequence of events related to the mechanism, and executing all of the related scenarios is necessary for complete checking of the mechanism interpretation. + +Now back to the question from the beginning of this blog: How will the Compliance Work Group handle ISA compliance with the POWER ISA being open-sourced? A new OpenPOWER Work Group will be formed to focus on the Power ISA activities, including documenting the POWER ISA Subset definition and requirements. The Compliance Work Group will develop OpenPOWER ISA Compliance Definition specifications for the POWER ISA Subsets to give compliance guidance, ensuring that software shown to execute properly on one compliant processor implementation will execute properly on a different, also compliant, processor implementation of the same ISA Subset. + +If you have comments you would like to make on the OpenPOWER ISA Compliance Definition documents, you can submit them to the Compliance Work Group by emailing: [openpower-isa-thts@mailinglist.openpowerfoundation.org](mailto:openpower-isa-thts@mailinglist.openpowerfoundation.org). diff --git a/content/blog/openpower-makes-fpga-acceleration-snap.md new file mode 100644 index 0000000..0e7147e --- /dev/null +++ b/content/blog/openpower-makes-fpga-acceleration-snap.md @@ -0,0 +1,64 @@ +--- +title: "OpenPOWER Makes FPGA Acceleration a “SNAP”" +date: "2016-10-27" +categories: + - "capi-series" + - "blogs" +tags: + - "featured" +--- + +_By Bruce Wile, CAPI Chief Engineer and Distinguished Engineer, IBM_ + +## Improving on the CAPI Base Technology + +In the datacenter, metrics matter.  Competition between application providers is fierce, with pressure to provide benchmarks that show continued competitive advantages in performance, price, and power.  
Application level improvements rode the Moore’s law performance improvement curve for decades, and now require accelerator innovations to deliver the performance gains needed to maintain current clients and win new business.  FPGA acceleration has long been an option, but the difficult programming model and specialized computer engineering skills hindered FPGAs in mainstream datacenters. + +The biggest companies see this trend and have put significant resources into FPGA integration into the datacenter. But enabling FPGA acceleration for the masses has been a challenge. OpenPOWER’s Acceleration Workgroup is changing that. + +![capi-snap-neo4j-tile](images/CAPI-SNAP-Neo4j-Tile-1024x512.png) + +The [CAPI](http://ibm.biz/powercapi) infrastructure, introduced on POWER8 in 2014, provides the technology and ecosystem foundation to enable datacenter applications to integrate with FPGA acceleration.  The technology base has everything needed to support the datacenter—virtualization (for multiple simultaneous context calls), a threaded model (for programming ease), removal of the device driver overhead (performance enablement), and an open ecosystem (for the masses to build upon). + +As a result, FPGA experts around the world created CAPI accelerators, many of which are listed at [ibm.biz/powercapi\_examples](http://ibm.biz/powercapi_examples). These are creative, compelling acceleration algorithms that open doors to capabilities previously beyond reach. + +\[caption id="attachment\_4243" align="aligncenter" width="320"\]![faces](images/Faces.gif) Check out “Facial analysis for emotion detection” (ibm.biz/powercapi\_SS\_emotionDetect) from SiliconScapes for a slick example.\[/caption\] + +But there’s still a skills gap between the FPGA experts (computer engineers) and the programming experts working for most Independent Software Vendors (ISVs).  For FPGAs to deliver on their promise of higher performance at lower cost and lower power, we need further enablement for ISVs to embrace FPGA acceleration. + +“Extending the capability of the CAPI device will offer our engineers and ultimately our users more options for working efficiently with complex connected data,” explains Philip Rathle, VP of Products at OpenPOWER member Neo4j. + +## Accelerating Acceleration + +Enter OpenPOWER and the Accelerator Workgroup.  At April 2016’s OpenPOWER Summit, multiple companies agreed to create a framework around CAPI. Two significant directives drove the work effort that followed: + +1. The framework would make it easy for programmers to call accelerators and write their own acceleration IP. +2. The framework would be open source to enable continued enhancements and cross-company collaboration. + +Collaboration grew for building the framework, with significant contributions from IBM, Xilinx, Rackspace, Eideticom, Reconfigure.io, Alpha-Data, and Nallatech.  Each company brought unique skills and perspectives to the effort, with a common goal of releasing the first version of the open source framework by the end of 2016. + +![capi-snap-levyx-tile](images/CAPI-SNAP-Levyx-Tile-1024x512.png) + +## Bringing Developers CAPI in a SNAP! + +Today, at [OpenPOWER Summit Europe](https://openpowerfoundation.org/openpower-summit-europe/), we are announcing the CAPI Storage, Networking, and Acceleration Programming Framework, or CAPI SNAP Framework.  The framework fulfills the initial vision of the team, and will grow beyond the first release.  
Upon release, the framework, including source code, will be available for anyone to try via github. + +The framework is key for developers or anyone else looking to bring the power of FPGA acceleration to their data center. CAPI SNAP will: + +- Make it easy for developers to create new specialized algorithms for FPGA acceleration in high-level programming languages, like C++ and Go, instead of less user-friendly languages like VHDL and Verilog. +- Make FPGA acceleration more accessible to ISVs and other organizations to bring faster data analysis to their users. +- Leverage the OpenPOWER Foundation ecosystem to continually drive collaborative innovation. + +Levyx Chief Business Development Officer Bernie Wu already sees how CAPI SNAP can make an impact for the ISV. “Levyx is focused on accelerating Big Data Analytical and Transactional Operations to real-time velocities. The CAPI SNAP Framework will allow us to bring processing even closer to the data and simplify the programming model for this acceleration,” adding “we see the CAPI SNAP capability being used to initially boost or enable rich real-time analytics and stream processing in variety of increasingly Machine to Machine driven use cases.” + +## Learn More and Try CAPI SNAP for Yourself! + +For those interested in the CAPI SNAP Framework, we encourage you to watch for announcements at the OpenPOWER Summit Europe.  You can also read more about CAPI and its capabilities in the accelerated enterprise in our [CAPI series on the OpenPOWER Foundation blog](https://openpowerfoundation.org/blogs/capi-drives-business-performance/). + +Are you looking to explore CAPI SNAP for your organization’s own data analysis? Then apply to be an early adopter of CAPI SNAP by emailing us directly at [capi@us.ibm.com](mailto:capi@us.ibm.com). Be sure to include your name, organization, and the type of accelerated workloads you’d like to explore with CAPI SNAP. + +You will continue to see a drumbeat of activity around the framework, as we release the source code and add more and more capabilities in 2017. + +## **Additional CAPI SNAP Reading from OpenPOWER Members** + +Alpha-Data: [http://www.alpha-data.com/news.php](http://www.alpha-data.com/news.php) diff --git a/content/blog/openpower-open-compute-data-center.md b/content/blog/openpower-open-compute-data-center.md new file mode 100644 index 0000000..0afaf0c --- /dev/null +++ b/content/blog/openpower-open-compute-data-center.md @@ -0,0 +1,54 @@ +--- +title: "OpenPOWER and Open Compute Open the Data Center" +date: "2017-03-08" +categories: + - "blogs" +tags: + - "nvidia" + - "mellanox" + - "featured" + - "rackspace" + - "xilinx" + - "barreleye" + - "open-compute" + - "zaius" + - "alpha-data" + - "nvlink" + - "wistron" + - "e4-computing" + - "facebook" +--- + +_By Bryan Talik, President, OpenPOWER Foundation_ + +![Open Compute Summit 2017](images/open-compute-summit.jpg) + +The old adage “birds of a feather flock together” often proves true, and the members of the OpenPOWER Foundation are a testament to that wisdom. In fact, members of the OpenPOWER Foundation are coming together at the [2017 U.S. Open Compute Summit](http://www.opencompute.org/ocp-u.s.-summit-2017/) in Santa Clara, CA on March 8 to celebrate openness. + +## Living and Breathing Open Values + +To say that OpenPOWER and its members embrace open collaboration would be an understatement, and at Open Compute Summit our members are living by that value. 
+ +Andy Walsh, [Xilinx](https://www.xilinx.com/) Director of Strategic Market Development and OpenPOWER Foundation Board member, explains, “We very much support open standards and the broad innovation they foster. Open Compute and OpenPOWER are catalysts in enabling new data center capabilities in computing, storage, and networking.”   + +“Open standards and communities lead to rapid innovation,” says Adam Smith, CEO of [Alpha Data](http://www.alpha-data.com/).  “We are proud to support the latest advances of OpenPOWER accelerator technology featuring Xilinx FPGAs. Alpha Data’s production-ready FPGA accelerator boards provide leading edge development platforms for the highest performance solutions, significantly reducing the development time required to accelerate applications using FPGAs.” + +Some members even see collaboration as the key to satisfying the performance demands that the computing market craves. + +“The computing industry is at an inflection point between conventional processing and specialized processing,” [said Aaron Sullivan, distinguished engineer at Rackspace](http://blog.rackspace.com/the-latest-zaius-barreleye-g2-open-compute-openpower-server/). “To satisfy this shift in our industry, Rackspace and Google announced an OCP-OpenPOWER server platform last year, codenamed Zaius and Barreleye G2. At the OCP Summit, both companies are putting on the first public display of Zaius and Barreleye G2, marking a radical step forward for OCP and our industry. This server platform will advance the performance, bandwidth and power consumption demands for emerging applications that leverage machine learning, cognitive systems, real-time analytics and big data platforms. We look forward to our continued work alongside Google, OpenPOWER, OpenCAPI, and other Zaius project members, sharing the benefits with contributors and consumers across the world.” + +## Get Hands on with OpenPOWER at Booth C10 + +To showcase all of the great Open Compute Project collaborations our members are designing, developing, and producing, we’ll have our own booth for the second year in a row! Come join us at booth C10 at the Santa Clara Convention Center to see the latest demonstrations: + +- Prototypes of the product [codenamed Zaius](https://blog.rackspace.com/first-look-zaius-server-platform-google-rackspace-collaboration), the Open Compute POWER9 server platform designed by Google and Rackspace in collaboration with ODM partner Ingrasys Technology Inc., will be in the booth to see first-hand. In addition, Google and Rackspace published the Zaius specification to Open Compute in October 2016. Talk with engineers to learn about the specification process or to create a starting point for your own server design. +- Inventec will show a POWER9 OpenPOWER server based on the Zaius server specification.  +- Mellanox will showcase [ConnectX-5](http://www.mellanox.com/ocp/), their next generation networking adaptor that features 100Gb/s InfiniBand and Ethernet. This adaptor supports PCIe Gen4 and CAPI2.0, providing a higher performance and coherent connection to the POWER9 processor vs. PCIe Gen3. +- Wistron and E4 Computing will showcase their [newly announced OCP-form factor POWER8 server](http://cms-en.e4company.com//media/36581/pr_e4ce_wistron_ocpsummit_march8.pdf). Featuring two POWER8 processors, four NVIDIA Tesla P100 GPUs with the NVLink interconnect, and liquid cooling, the new platform represents an ideal OCP-compliant HPC system. 
+- In collaboration with many partners – IBM, Xilinx, and Alpha Data will have a line-up of several FPGA adaptors designed for POWER8 and POWER9.  Featuring PCIe Gen3 CAPI1.0 for POWER8, PCIe Gen4 CAPI2.0 and 25G/s CAPI3.0 for POWER9, these new FPGAs bring acceleration to a whole new level.  OpenPOWER member engineers will be on-hand to provide information regarding the CAPI SNAP developer and programming framework as well as OpenCAPI. +- IBM will showcase their work performed at [Facebook's Disaggregate Lab](https://code.facebook.com/posts/1155412364497262/facebook-opens-lab-to-others-to-validate-infrastructure-software/) — where they tested Leopard and Knox with IBM's Spectrum Scale software. This demonstrates the flexibility of using high performance open hardware with software-defined storage. +- Additionally, IBM has previously tested POWER8-based OCP and OpenPOWER Barreleye servers with IBM's Spectrum Scale software, a full-featured global parallel file system with roots in High Performance Computing and now widely adopted in commercial enterprises across all industries for data management at petabyte scale. This work will also be shown at our booth. + +It is very exciting to see how the ecosystem is coming together to revolutionize the datacenter, and Open Compute Summit is a great opportunity to network and build collaborative relationships with other open-minded organizations. + +Hope to see you at the Summit and booth C10! diff --git a/content/blog/openpower-open-compute-rackspace-barreleye.md b/content/blog/openpower-open-compute-rackspace-barreleye.md new file mode 100644 index 0000000..236dd25 --- /dev/null +++ b/content/blog/openpower-open-compute-rackspace-barreleye.md @@ -0,0 +1,68 @@ +--- +title: "Rackspace, OpenPOWER & Open Compute: Full Speed Ahead with Barreleye" +date: "2015-10-06" +categories: + - "blogs" +tags: + - "openpower" + - "featured" + - "rackspace" +--- + +_By Aaron Sullivan, Senior Director and Distinguished Engineer, Rackspace_ + +In an open community, with great partners, it’s amazing how fast things get done.![barreleye fish](images/barreleye-fish.jpg) + +At the end of 2014, Rackspace announced its affiliation with [OpenPOWER](https://openpowerfoundation.org). At that time, we shared our intention to build an OpenPOWER server that cut across four major open source community initiatives (OpenStack, Open Compute, OpenPOWER, and, of course, Linux). + +This past spring, at the Open Compute and OpenPOWER annual summits, Rackspace offered up our vision for a more powerful cloud, and shared our “Barreleye” server concept design. (We chose to name it after the barreleye fish because as you can see from the photo above, the fish [has a transparent head](https://en.wikipedia.org/wiki/Barreleye). Get it? It’s open!) + +\[caption id="attachment\_2039" align="aligncenter" width="625"\]![Barreleye_26](images/Barreleye_26-1024x754.jpg) Alpha release of Barreleye server package; lid removed, drive tray extended.\[/caption\] + +Since then, we’ve worked closely with our partners — [Avago](http://www.avagotech.com), [IBM](http://www.ibm.com), [Mellanox](http://www.mellanox.com), [PMC](http://pmcs.com), [Samsung](http://www.samsung.com) — to make that concept a reality. The first Barreleye servers came online in July, in China. In August, we shipped engineering samples to our San Antonio lab and to our development partners. + +Two weeks ago, we showed Barreleye off in its first public forum: a Rackspace-hosted Open Compute engineering workshop. 
+ +\[caption id="attachment\_2041" align="aligncenter" width="625"\]![OCP Workshop](images/OCP-Workshop-1024x768.jpeg) Attendees at last month’s engineering workshop check out the Barreleye, the world’s first Open Compute server with an OpenPOWER chip.\[/caption\] + +\[caption id="attachment\_2042" align="aligncenter" width="625"\]![Barreleye_08](images/Barreleye_08-1024x683.jpg) L to R, bottom and top views of “Turismo” 10-core/80 hardware thread OpenPOWER processor.\[/caption\] + +Our next batch of samples will arrive in November, with more systems going to more partners shortly thereafter. We hope to submit a draft of Barreleye’s Open Compute specification before year-end, and aim to put Barreleye in our datacenters for OpenStack services early next year. Check out some close-ups, below: + +\[caption id="attachment\_2050" align="aligncenter" width="625"\]![Barreleye_07](images/Barreleye_07-1024x683.jpg) Barreleye portable “lunchbox” power supply; enables benchtop testing for those without an open rack.\[/caption\] + +\[caption id="attachment\_2047" align="aligncenter" width="625"\]![Barreleye_03](images/Barreleye_03-1024x683.jpg) Barreleye hot-swappable drive tray with 15 SSDs installed.\[/caption\] + +\[caption id="attachment\_2044" align="aligncenter" width="625"\]![Barreleye_10](images/Barreleye_10-879x1024.jpg) Alpha release of Barreleye motherboard (top) and customizable IO board (bottom).\[/caption\] + +Barreleye has the capacity for phenomenal virtual machine, container, and bare metal compute services. Further out on the horizon, we’re looking forward to Barreleye’s successor on the next generation of OpenPOWER chips, and CAPI-optimized services. + +Speaking of CAPI, the [OpenPOWER Foundation](https://openpowerfoundation.org/) blog is running a series on CAPI, which enables solution architects to improve system-level performance. IBM’s Sumit Gupta writes about [accelerating business applications with CAPI](https://openpowerfoundation.org/blogs/capi-drives-business-performance/), while Brad Brech weighs in on the benefits of [using CAPI with Flash](https://openpowerfoundation.org/blogs/capi-and-flash-for-larger-faster-nosql-and-analytics/). + +It’s been an incredible journey thus far. Here are some observations we’ve made along the way: + +- Turns out bugs in open source firmware — even complicated bugs that span many elements — tend to get fixed much faster. The code and functions are not hidden, meaning everyone can get involved. +- BIOS features. Once you’ve worked with [OpenPOWER’s BIOS](https://github.com/open-power/), you’ll want it on every server. +- Even in its first year, [OpenBMC](https://github.com/facebook/openbmc) is showing great potential. Are you in DevOps? Want more control? You’ll get it with OpenBMC. Keep an eye on it. +- Linux distribution, device driver and adapter firmware support continue to expand. At this rate, it will not be long before mainstream server adapter products are as easy to plug into OpenPOWER as any other server. +- People are skeptical until they see it, touch it, log into it. Once they do, they’re pretty excited with Barreleye’s very impressive specs, including: + - The memory bandwidth — around 200 GiB/sec + - The clock speed — 3.1 – 3.7 GHz, turbo between 3.6 – 4.1 + - The cache — more than 200 MiB + - The CPU threads — 128 – 192, utilities like “top” and “nmon” show a CPU for every thread. Even on large displays, they run right off the edge of the screen. 
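(A quick footnote on that last bullet: the logical-CPU figure that top and nmon report is simply what the kernel exposes, and you can query it directly. The little sketch below is plain POSIX, nothing Barreleye-specific; on a POWER8 box running in SMT-8 mode each core contributes eight entries to that count.)

```c
#include <stdio.h>
#include <unistd.h>

/* Print the logical CPUs (hardware threads) the kernel exposes.  On a POWER8
 * system in SMT-8 mode, every core shows up as eight of these entries, which
 * is why tools like top and nmon list well over a hundred "CPUs". */
int main(void)
{
    long online     = sysconf(_SC_NPROCESSORS_ONLN);
    long configured = sysconf(_SC_NPROCESSORS_CONF);

    printf("%ld logical CPUs online (of %ld configured)\n", online, configured);
    return 0;
}
```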
+ +When we announced [our participation in OpenPOWER](http://blog.rackspace.com/openpower-opening-the-stack-all-the-way-down/) last year, we said, “We want our systems open, all the way down. This is a big step in that direction.” + +Many big steps already taken. More big steps to go. All towards a more open future. We get there faster, together. + +* * * + +**_![sullivan_aaron_03](images/sullivan_aaron_03-150x150.jpg)About Aaron Sullivan_** + +Aaron Sullivan is a Senior Director and Distinguished Engineer at Rackspace, focused on infrastructure strategy. + +Aaron joined Rackspace's Product Development organization in late 2008, in an engineering role, focused on servers, storage, and operating systems. He moved to Rackspace’s Supply Chain/Business Operations organization in 2010, mostly focused on next generation storage and datacenters. He became a Principal Engineer during 2011 and a Director in 2012, supporting a variety of initiatives, including the development and launch of Rackspace’s first Open Compute platforms. He recently advanced to the role of Senior Director and Distinguished Engineer. These days, he spends most of his time working on next generation server technology, designing infrastructure for Rackspace’s Product and Practice Areas, and supporting the growth and capabilities of Rackspace’s Global Infrastructure Engineering team. He also frequently represents Rackspace as a public speaker, writer, and commentator. + +He was involved with Open Compute since its start at Rackspace. He became formally involved in late 2012. He is Rackspace’s lead for OCP initiatives and platform designs. Aaron is serving his second term as an OCP Incubation Committee member, and sponsors the Certification & Interoperability (C&I) project workgroup. He supported the C&I workgroup as they built and submitted their first test specifications. He has also spent some time working with the OCP Foundation on licensing and other strategic initiatives. + +Aaron previously spent time at GE, SBC, and AT&T. Over the last 17 years, he’s touched more technology than he cares to talk about. When he’s not working, he enjoys reading science and history, spending time with his wife and children, and a little solitude. diff --git a/content/blog/openpower-outside-of-the-data-center.md b/content/blog/openpower-outside-of-the-data-center.md new file mode 100644 index 0000000..f81650d --- /dev/null +++ b/content/blog/openpower-outside-of-the-data-center.md @@ -0,0 +1,22 @@ +--- +title: "Bringing OpenPOWER Outside of the Data Center" +date: "2016-11-30" +categories: + - "blogs" +--- + +_By Timothy Pearson, Raptor Engineering_ + +![talos-opf_png_project-body](images/talos-opf_png_project-body-300x221.jpg) + +Ever wish you could use something other than an insecure x86 or low-powered ARM machine to communicate with the OpenPOWER server sitting in your data center? Wish no longer! Meet the Talos™ workstation-class ATX mainboard, built on OpenPOWER and bringing the security and open systems advantages of POWER8 out of the data center and onto your desk. 
OpenPOWER-member Raptor Engineering is committed to making owner-controllable, Libre-friendly systems available for engineers, programmers, data analysts, as well as anyone else who needs [serious computing power](https://www.crowdsupply.com/raptor-computing-systems/talos-secure-workstation#unreal-engine-4-on-openpowertm), [security](https://www.crowdsupply.com/raptor-computing-systems/talos-secure-workstation#the-state-of-general-purpose-computing), and [flexibility](https://www.crowdsupply.com/raptor-computing-systems/talos-secure-workstation#features-specifications) all in the same machine. The OpenPOWER Foundation provides access to the only modern, performant architecture and shipping CPU that meets these criteria—so OpenPOWER is a perfect fit for our Talos™ machines. Talos™ also shines in storage servers and network processing, where the large number of PCIe 3.0 slots combined with POWER8's I/O performance provides both configuration flexibility and high performance. + +## Meet Talos + +The [Talos mainboard](https://www.crowdsupply.com/raptor-computing-systems/talos-secure-workstation#features-specifications) hosts a single socketed POWER8 processor and two Centaur DDR3 memory buffers on a standard ATX mainboard.  It includes significant I/O and memory expansion capabilities, including 8 DDR3 ECC memory slots and 7 PCIe slots (56 total PCIe 3.0-capable lanes!), along with the wide variety of on-board peripherals expected in a workstation class mainboard.  Unlike existing OpenPOWER machines, Raptor Engineering has gone one step further and is using reprogrammable logic devices (FPGAs) that have an open toolchain available, making Talos™ completely self-hosting! If you need to modify any aspect of the Talos™ firmware or reprogrammable logic, you can completely recompile and resynthesize the firmware using your Talos™ machine instead of having to fall back to an x86 or Microsoft® Windows® environment.  We have also been instrumental in securing the release of the SBE/Winkle engine code, and as a result the Talos™ mainboard is completely open down to the lowest level firmware and machine schematics, making it an ideal research and development platform to explore next-generation technologies such as CAPI. + + ![talos](images/Talos.jpg) Thanks to IBM's support of Linux on OpenPOWER, Talos™ is ready to run using a variety of modern Linux distributions. We have tested and qualified a wide variety of hardware on our POWER8 SDV for use with Talos™, including GPUs, Mellanox Infiniband devices, and much more. Thanks to POWER8's little endian support, most Linux drivers simply work, and those few that exhibit minor issues due to faulty x86-centric coding are usually trivial to fix. We also plan to work with BSD developers to port one or more of the BSDs to OpenPOWER in support of Talos™, opening the world beyond x86 even wider. + +## Learn More About Talos + +Visit the [Talos product page](https://www.crowdsupply.com/raptor-computing-systems/talos-secure-workstation/) to watch videos, read white papers, and learn about how the new Talos workstation brings the data center to your desk! 
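A footnote on the "faulty x86-centric coding" remark above, for anyone curious what those trivial fixes usually look like: one classic example is code that assumes plain `char` is signed. It is on x86-64, but the 64-bit Power ELF ABI (like ARM's) makes plain `char` unsigned, so an EOF test written against a `char` never fires. The snippet below is a generic illustration, not a patch from any particular driver:

```c
#include <stdio.h>

/* x86-centric: plain 'char' happens to be signed on x86-64, so comparing it
 * with EOF (-1) appears to work.  On ppc64/ppc64le plain 'char' is unsigned,
 * EOF gets truncated to 255, the comparison is never true, and the loop spins
 * forever once the input is exhausted. */
static void broken_copy(FILE *in, FILE *out)
{
    char c;                              /* bug: should be int */
    while ((c = fgetc(in)) != EOF)
        fputc(c, out);
}

/* Portable: keep fgetc()'s int return value until after the EOF check. */
static void fixed_copy(FILE *in, FILE *out)
{
    int c;
    while ((c = fgetc(in)) != EOF)
        fputc(c, out);
}

int main(void)
{
    (void)broken_copy;                   /* shown for contrast only */
    fixed_copy(stdin, stdout);
    return 0;
}
```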
diff --git a/content/blog/openpower-partners-and-experts-host-an-introduction-to-power-at-iem-kolkata.md b/content/blog/openpower-partners-and-experts-host-an-introduction-to-power-at-iem-kolkata.md new file mode 100644 index 0000000..cdcdc60 --- /dev/null +++ b/content/blog/openpower-partners-and-experts-host-an-introduction-to-power-at-iem-kolkata.md @@ -0,0 +1,34 @@ +--- +title: "OpenPOWER Partners and Experts Host an Introduction to POWER at IEM, Kolkata" +date: "2019-09-24" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "machine-learning" + - "openpower-foundation" + - "power-systems" + - "ibm-watson" + - "ibm-power" + - "institute-of-engineering-management" + - "kolkata" +--- + +By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/) + +![](images/Kolkata-1.jpg) + +Earlier this month, the [Institute of Engineering & Management](http://iem.edu.in/) in Kolkata, India welcomed OpenPOWER experts and novices to a full day workshop to discuss how to use IBM POWER systems for big data analysis and artificial intelligence applications. + +More than 200 participants from around Kolkata attended; attendees included not only students and faculty from local universities, but also professionals from the state’s Ministry of Higher Education. + +After an introduction to the OpenPOWER platform, partners and researchers across industries shared their experience using IBM POWER to support genomics programs, image analytics and more. [Professor Arghya Kusum Das](https://www.linkedin.com/in/arghya-kusum-das-567a4761/) from the University of Wisconsin at Platteville, for example, walked attendees through how he utilizes POWER to handle terabytes of metagenomic data (see the white paper [here](https://www.lsu.edu/mediacenter/docs/LSU-IBM_POWER8_GenomeBenchmark.pdf)). + +![](images/Kolkata-2-1024x576.jpg) + +Another crowd-favorite session was the Introduction to Watson Machine Learning Acceleration (WML-A). Participants got hands-on experience with WML-A through [JupyterLab Notebook](https://jupyter.org/) . + +Please see the full workshop on [YouTube](https://m.youtube.com/watch?v=GYH69Yr75h4). + +If you are interested in attending a similar session, we are looking to schedule more worldwide, including in India and the U.S. Be sure to also check out [OpenPOWER Summit 2019 Europe](https://events.linuxfoundation.org/events/openpower-summit-eu-2019/), happening next month (October 31-November 1) in Lyons, France! diff --git a/content/blog/openpower-pcie.md b/content/blog/openpower-pcie.md new file mode 100644 index 0000000..925bdcb --- /dev/null +++ b/content/blog/openpower-pcie.md @@ -0,0 +1,39 @@ +--- +title: "How OpenPOWER Members Created the World’s First Production-ready PCIe Gen4 NVM Express System" +date: "2018-03-21" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "hpc" + - "fpga" + - "rackspace" + - "xilinx" + - "openpower-foundation" + - "high-powered-computing" + - "power9" + - "nvm-express" + - "nvme" + - "eideticom" +--- + +## What is NVM Express (NVMe)? + +NVMe is a new and increasingly popular protocol for interfacing with Solid State Drives (SSDs) in enterprise, data-center and HPC markets. NVMe has a broad eco-system that is capable of running on OpenPOWER systems. + +**How it works:** NVMe uses PCIe to connect the CPU to the SSDs. Eideticom deployed its NVMe-based accelerator, NoLoad™ product on top of Xilinx’s FPGA technology on a production ready FPGA acceleration card. 
The acceleration card ran inside a production-ready OpenPOWER server from Rackspace, thus creating the world’s first PCIe Gen4 NVM Express production-ready system. + +IBM’s POWER9 is the first production CPU with PCIe Gen4 IO. Because of this, the data bandwidth is nearly doubled compared to PCIe Gen3. + +> _“We are excited to incorporate Eideticom’s storage acceleration and PCIe Gen4 technology in our Barreleye G2 server,” said Adi Gangidi, system design engineer at Rackspace. “Accelerator IP enablers like Eideticom are helping drive the widespread data center adoption of a new and unmatched class of IO.”_ + +## Which OpenPOWER members collaborated on this project? + +Eideticom, IBM, Rackspace and Xilinx worked together to create the world’s first PCIe Gen4 NVM Express production-ready system. This collaboration enabled a new generation of storage performance for the OpenPOWER ecosystem based on open standards at PCIe Gen4 speeds. + +> _“The OpenPOWER Foundation has been aggressively adopting PCIe Gen4 because we see the need for faster storage, network and compute,” said Bryan Talik, President of the OpenPOWER Foundation. “OpenPOWER has already demonstrated PCIe Gen4 support with IBM, Mellanox, and Xilinx, and we are delighted that Eideticom can now offer fast storage and compute via NVMe over that PCIe Gen4 ecosystem.”_ + +Click here for more information on the [world’s first PCIe Gen4 NVM Express production-ready system](http://www.eideticom.com/blog/27-nvm-express-over-pcie-gen4-baby.html). + +[![](images/Eideticom-1024x479.png)](https://openpowerfoundation.org/wp-content/uploads/2018/03/Eideticom.png) diff --git a/content/blog/openpower-ready-solutions.md new file mode 100644 index 0000000..adf8b01 --- /dev/null +++ b/content/blog/openpower-ready-solutions.md @@ -0,0 +1,18 @@ +--- +title: "OpenPOWER Ready™ Solutions Expand Growing OpenPOWER Ecosystem" +date: "2016-04-06" +categories: + - "blogs" +--- + +_By Jeff Brown, ‎Distinguished Engineer, Emerging Product Development at IBM_ + +![OPS_08_MG_3501](images/OPS_08_MG_3501-692x1024.jpg) + +Continuing the OpenPOWER Foundation’s momentum, we’ve launched the OpenPOWER Ready™ program at the [OpenPOWER Summit](https://openpowerfoundation.org/openpower-summit-2016/) this week in San Jose. This program empowers both members and non-members to embrace and promote their OpenPOWER technology. This designation will strengthen our ecosystem of products and solutions built upon IBM’s POWER architecture, creating additional confidence for developers, builders and customers that use OpenPOWER Ready hardware and software. + +OpenPOWER Ready was designed to indicate that a product or solution has met a minimum set of criteria set forth by the Foundation. The OpenPOWER Ready definition and criteria were developed collaboratively by several of the Foundation’s work groups and will evolve over time under the direction of a new OpenPOWER Ready work group. Part of these criteria centers on whether a product or solution is interoperable with other OpenPOWER Ready products, reinforcing the collaborative nature of the OpenPOWER ecosystem. Both OpenPOWER members and non-members can apply for the mark, which can be designated for both qualifying hardware and software. We’ve outlined the full set of OpenPOWER Ready criteria on the [OpenPOWER Foundation website](http://staging.openpowerfoundation.org/?resource_lib=openpower-ready-definition-and-criteria). 
+ +We are excited to continue to transform the data center with the OpenPOWER Ready journey. In addition to increasing confidence in existing members’ OpenPOWER-based products, we hope to inspire non-members with OpenPOWER Ready innovations to join the OpenPOWER Foundation and further grow our collaborative, open ecosystem. It is our vision that companies and other entities utilizing this mark will further solidify OpenPOWER technology as a superior alternative to other server solutions. + +To see the first set of products designated OpenPOWER Ready, visit the [OpenPOWER Ready homepage](https://openpowerfoundation.org/technical/openpower-ready/). diff --git a/content/blog/openpower-rebel-alliance.md b/content/blog/openpower-rebel-alliance.md new file mode 100644 index 0000000..54170b9 --- /dev/null +++ b/content/blog/openpower-rebel-alliance.md @@ -0,0 +1,52 @@ +--- +title: "OpenPOWER: The Rebel Alliance of the Industry" +date: "2015-12-22" +categories: + - "blogs" +tags: + - "openpower" + - "featured" + - "ecosystem" + - "star-wars" + - "alliance" +--- + +_By Sam Ponedal, Social Strategist for OpenPOWER_ + +# [![Social-Tiles-Rebel-Alliance_Draft03_02](images/Social-Tiles-Rebel-Alliance_Draft03_02.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/Social-Tiles-Rebel-Alliance_Draft03_02.jpg) + +# Episode 8: OpenPOWER + +_OpenPOWER_ + +_A dark empire has spread across the Compute Galaxy. Driven by a zealous belief in an antiquated law, the empire seeks to place the universe's IT practitioners under their repressive rule._ + +_But a new force is rising. Seeking to define a new approach to hardware based on Open Acceleration and collaboration, the OpenPOWER Foundation's 170+ member ecosystem is challenging the empire._ + +_Driven by open innovation, the OpenPOWER Foundation is achieving new levels of performance..._ + +(To see this crawl as it's meant to be viewed, [click here to StarWars.com](http://www.starwars.com/games-apps/star-wars-crawl-creator/?cid=490abec6c54e912a0a83388816edac9aa3adb231)) + +OK, I had to get that out of my system. If you’re like me, this is a week that you’ve been waiting on for a decade: the release of the next Star Wars film, The Force Awakens. Everyone is talking about it, arguably it’s one of the most anticipated events in cinema history. + +As if I didn’t have enough to be excited about, at Supercomputing 2015 in Austin, TX last month, analyst Dan Olds coined OpenPOWER the “Rebel Alliance of the industry,” and I couldn’t agree more. Like a Wookie in a China shop this thought was bursting to get out, so I wrote it all down and examined the ways that OpenPOWER is like the Rebel Alliance. + +First off what’s one of the most iconic symbols of the Rebel Alliance? That’s right, the X-Wing Fighter. + +It’s versatile, powerful, and always gets the job done. When the Rebels need to take down the Death Star, they call on a squadron of X-Wings to target the exhaust port. For OpenPOWER, the X-Wing is the POWER8 processor. Its 4X thread per core improvement over x86 is reminiscent of the X-Wing’s four wings. Couple that with POWER8’s performance benchmarks showing greater than 20% performance over x86 and it’s clear that POWER8 is the workhorse of the OpenPOWER Rebel Alliance. This begs the question: if POWER8 is an X-Wing, what’s x86? That’s an easy one: a TIE fighter, and any Star Wars fan knows what happens when a TIE fighter and an X-Wing go at it. 
+ +IBM believes the future of HPC and Enterprise data centers is based on an accelerated data center architecture.  This architecture consists of accelerated computing, accelerated storage, and accelerated networking.   There are new accelerators, storage, and networking devices coming from several technology companies. + +Accelerators are all about speed, and if you ever need to make the Kessel run you know that you need the fastest ship in the galaxy: the Millennium Falcon. Accelerator technology takes what the industry considered "fast" and jumps it to lightspeed. Coupled with the POWER8 processor, an accelerator can outrun any task, or Imperial Star Destroyer, thrown at it. Just don't forget to check the negative power couplings. + +But one of OpenPOWER’s strongest assets is its developers, who embody the ideals of open and collaboration, our own Jedi Order. Just as the Jedis are the defenders of truth and justice in the galaxy, so are our developers the custodians of innovation in an open hardware and software ecosystem. And like the Jedi Order, we know that it is important to train and provide tools to the next generation of OpenPOWER Developer so that they can hone their skills within the ecosystem. + +That’s why we recently expanded Supervessel, OpenPOWER’s development cloud, to feature new GPU acceleration as a service, deep learning frameworks, and access to cloud-based FPGAs. Add to that our collaborations with the University of Texas’s TACC and Oregon State University’s Open Source Lab to offer free development resources available to anyone worldwide. + +But the best part of the Rebel Alliance? That it is open to anyone seeking refuge and asylum from the Empire, and the same is true for OpenPOWER. Our collaborative ecosystem is welcoming to all joiners, and we maintain an open door for people seeking to revolutionize the data center through open hardware and open software. If you would like to get involved in OpenPOWER, [read more about the different levels of membership and engagement](https://openpowerfoundation.org/get-involved/). And for the latest news be sure to follow us on [Twitter](https://twitter.com/OpenPOWERorg), [Facebook](https://www.facebook.com/openpower/), and [LinkedIn](https://www.linkedin.com/company/openpower-foundation/). + +Thank you, and may the Open Source be with you. + +* * * + +_[![_Y1O5015](images/Y1O5015-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/Y1O5015.jpg)Sam Ponedal is an IBM Social Strategist responsible for OpenPOWER's social presence. He is an avid tech enthusiast, geek, and nerd who uses puns way more than necessary in a professional environment. You can follow him on [Twitter](https://twitter.com/Sam_Ponedal) to see his latest._ diff --git a/content/blog/openpower-research-facility-iit-bombay.md b/content/blog/openpower-research-facility-iit-bombay.md new file mode 100644 index 0000000..bbf5994 --- /dev/null +++ b/content/blog/openpower-research-facility-iit-bombay.md @@ -0,0 +1,58 @@ +--- +title: "OpenPOWER Helps India Advance National Supercomputing Mission with new Research Facility at IIT Bombay" +date: "2016-09-14" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Professor P.S.V. Nataraj, Systems and Control Engineering Group, IIT Bombay_ + +![iitblogo](images/iitblogo-300x287.png) + +During my visit to IBM, Bangalore in April 2014, the idea for having a collaboration between the OpenPOWER Foundation and IIT Bombay (IITB) was born. 
The OpenPOWER Foundation’s representative, Ganesan Narayanasamy, presented the genesis, objectives, and activities of the Foundation to Prof. Nataraj, and from this conversation IIT Bombay joined the Foundation as an academic member. + +In collaboration with IBM, IIT Bombay developed a research proposal that was submitted for the IBM SUR award.  In September 2015, Prof. Paluri S. V. Nataraj received the IBM SUR award for his research project “_Development of parallel algorithms and software library for constrained global optimization of polynomial problems using the Bernstein polynomial approach_.” As a part of the SUR award, computing equipment was donated by OpenPOWER partners IBM, NVIDIA, and Mellanox to IITB. The OpenPOWER Research Facility (OPRF) comprising this equipment was set up and is hosted in a dedicated data centre at IITB. + +The facility officially [opened](https://www-03.ibm.com/press/in/en/pressrelease/50367.wss) on August 17, 2016. + +### **Building India’s National Supercomputing Mission through OpenPOWER Collaboration** + +The National Supercomputing Mission aims to build a culture of supercomputing for solving complex R&D problems and designing solutions addressing various country-specific requirements for scientific, strategic and societal applications. To achieve this aim, it is important that various kinds of supercomputing platforms be set up and made available to users across the country. + +OPRF is the only OpenPOWER-based supercomputing facility in India, and it gives access to researchers and academicians all over the country. By providing this kind of access, efforts are directed towards ensuring that (a) researchers gain great speed-ups and other benefits of the POWER8 architecture, and (b) academicians experiment and gain insight into the open platform. + +Furthermore, as OpenPOWER is a community-driven initiative, we at OPRF ensure that knowledge and infrastructure are not restricted to a few, but are made available to everyone who aims to contribute to making society a better place. + +### **The Equipment** + + The OpenPOWER Research Facility at IIT Bombay is located in the Systems and Control Engineering Department. + +It is placed in an IBM 42U Rack, and consists of: + +2 x OpenPOWER-based 8247-42L servers, each having: + +- 20 cores of 3.42 GHz each (POWER8) +- 256 GB RAM +- 8 TB raw capacity SATA hard disks +- 2 x Tesla K80 GPUs each with 2 x 2496 GPU cores and 2 x 12GB VRAM +- 1 x Coherent Accelerator Processor Interface (CAPI) card + +1 x OpenPOWER-based 8247-21L with + +- 10 cores of 3.75 GHz each (capable of creating 40 POWER threads) +- 128 GB RAM +- 8 x 1 TB raw capacity hard disks + +These machines are interconnected with a Mellanox switch of 12 ports with 56 Gigabit throughput. + +### **Get Involved** + +Academicians and researchers are welcome to make use of the OpenPOWER Research Facility for their education and research activities. Access to this facility is currently given to users across India, on request. The registration form is available at [http://oprfiitb.in/access\_request\_form](http://oprfiitb.in/access_request_form) + +* * * + +![psvn_pic](images/psvn_pic.jpg) + +_Paluri S. V. Nataraj is a Professor in the Systems and Control Engineering Group at IIT Bombay. He obtained a Ph.D. from IIT Madras in process dynamics and control in 1987. He then worked in the CAD center at IIT Bombay, India for about one and a half years before joining the faculty of the Systems and Control Engineering Group at IIT Bombay in 1988. 
He has been involved in teaching and research for about 28 years at IIT Bombay. His current research interests are in the areas of Global Optimization, Parallel Computing, Reliable Computing, and Robust Control._ diff --git a/content/blog/openpower-summit-2016-2.md b/content/blog/openpower-summit-2016-2.md new file mode 100644 index 0000000..2d8408b --- /dev/null +++ b/content/blog/openpower-summit-2016-2.md @@ -0,0 +1,28 @@ +--- +title: "OpenPOWER Foundation Revolutionizes the Data Center at Summit 2016!" +date: "2016-04-06" +categories: + - "blogs" +--- + +_By John Zannos, Chairman, OpenPOWER Foundation_ + +![OpenPOWER_Summit2016_logo2](images/OpenPOWER_Summit2016_logo2-1024x370.jpg) + +As we reach the pinnacle of our second [OpenPOWER Summit](https://openpowerfoundation.org/openpower-summit-2016/) in San Jose, I want to take a minute to recognize all of our members who have contributed to the momentum and growth we’ve seen since we gathered here last year for our inaugural event. + +I also want to thank the members of the OpenPOWER Foundation Board for electing me as the new Chairman and electing Calista Redmond of IBM as President. Calista and I follow the success of the Foundation’s former Chair and founding OpenPOWER member, Gordon MacKean of Google, and former President and founding OpenPOWER member, Brad McCredie of IBM. My thanks to Gordon and Brad. + +I’m happy to say that our membership has grown and surpassed the two hundred mark. It’s not just our membership that’s expanding, though – it’s the entire OpenPOWER ecosystem. We’re seeing more hardware and software innovations being developed and launched into the market, OpenPOWER work groups are building the guidelines that drive innovation, and there’s a growing number of developers working on OpenPOWER. + +It’s clear to see that companies around the world are interested in collaborating to create innovative products and solutions that meet the needs of the modern data center. The market continues to ask for technology choice and openness. OpenPOWER is supplying collaborative innovation by pulling together a community that is working and innovating together. + +Today at Summit, our members [announced](https://openpowerfoundation.org/press-releases/openpower-foundation-reveals-new-servers-and-big-data-analytics-innovations/) more than 50 new OpenPOWER-based innovations, many of which were developed in collaboration with fellow Foundation members. The new innovations showcase the Foundation’s commitment to CAPI accelerator technology and building new solutions for high performance computing and cloud deployments. These are real examples of the deep innovation that results from open collaboration. The full list of member solutions is impressive, as you can see by checking out our [OpenPOWER fact sheet](https://openpowerfoundation.org/wp-content/uploads/2016/04/HardwareRevealFlyerFinal.pdf). + +We’re not just focused on developing new solutions. We also remain committed to our OpenPOWER developer ecosystem. Today, we introduced the OpenPOWER Ready™ seal, enabling companies to validate their hardware and software solutions against self-test guidelines from the Foundation. We hope that OpenPOWER Ready will help grow our ecosystem, providing added confidence for developers, builders and customers. + +We also announced the first-ever [OpenPOWER Developer Challenge](http://bit.ly/236URpB) to encourage developers to tap the power of open and show us what they can create. 
+ +There are several exciting things planned at OpenPOWER Summit over the next few days. You can find the full schedule of OpenPOWER Summit events [on our website](https://openpowerfoundation.org/openpower-summit-2016/). If you’re onsite, we invite you to stop by the OpenPOWER Pavilion. And don’t forget about the renowned OpenPOWER Ice Bar tonight from 5-7 p.m. PT – it’s a fan favorite. + +Thank you, and please stop by and tell us what you are thinking. diff --git a/content/blog/openpower-summit-2016-vive-la-revolution.md b/content/blog/openpower-summit-2016-vive-la-revolution.md new file mode 100644 index 0000000..617c348 --- /dev/null +++ b/content/blog/openpower-summit-2016-vive-la-revolution.md @@ -0,0 +1,38 @@ +--- +title: "OpenPOWER Summit 2016: Vive la Révolution!" +date: "2016-02-09" +categories: + - "blogs" +tags: + - "featured" + - "openpower-summit" +--- + +_By Calista Redmond, President, OpenPOWER Foundation_ + +[![OpenPOWER_Summit2016_logo2](images/OpenPOWER_Summit2016_logo2-1024x370.jpg)](https://openpowerfoundation.org/openpower-summit-2016/)OpenPOWER Summit is coming up April 5-8 in San Jose and we want you there! In fact, we're making it easier than ever for you to attend by offering [20% off your registration fee](http://bit.ly/1KuWHLD) by using our discount code. Just input [**OPFSUMMIT2016** during checkout](http://bit.ly/1KuWHLD) to get up to $300 off the cost of admission! + +Can't join us in-person? Be sure to follow us on Twitter at [@OpenPOWERorg](http://twitter.com/openpowerorg) and use the hashtag [#OpenPOWERSummit](https://twitter.com/search?q=%23OpenPOWERSummit&src=typd) to get the latest from our on-the-ground social reporters and join the conversation! + +The OpenPOWER Foundation's model of [open collaboration between organizations](https://www.technologyreview.com/s/544321/competing-billion-dollar-tech-companies-are-joining-forces-heres-why/) has flipped the script and spawned an incredibly engaged community. We’re not adjusting the dial, we’re leveling the playing field and changing the game for both open software and hardware. This is a revolution for our industry. + +Hear from vendors like Mellanox, NVIDIA, Tyan, Nallatech, and IBM on their latest hardware innovations. See how MSPs like Rackspace, Arrow ECS, and Redis Labs are bringing OpenPOWER into the cloud. Get hands on with Canonical and Ubuntu to experience how OpenPOWER is built upon the leading open source operating system, Linux, and how we’re embracing and practicing the ideals of open source. + +We invite you to join us at the [OpenPOWER Summit](https://openpowerfoundation.org/openpower-summit-2016/) where you can: + +- see over 50 presenters and speakers share their OpenPOWER-driven innovation including talks from end users as well as hardware and software innovators, +- visit a show floor of demos to understand OpenPOWER innovations in action, +- network with your peers at the OpenPOWER Pavilion Theater to embrace the open spirit of collaboration, +- join  our ISV Roundtable to hear from cross-industry leaders about how OpenPOWER is accelerating their business, +- get hands on with CAPI and learn from OpenPOWER’s brightest engineers during our CAPI Lab, and +- more workshops and speakers to be announced in the coming weeks! + +And of course, grab a drink from our famous OpenPOWER Ice Bar! 
+ +[![Ice bar Pic](images/Ice-bar-Pic-1024x683.jpg)](https://openpowerfoundation.org/openpower-summit-2016/) + +To attend the Summit, [register here using the OpenPOWER Member 20% off discount code](http://bit.ly/1KuWHLD) **OPFSUMMIT2016**. + +And be sure to follow us on [Twitter](https://twitter.com/OpenPOWERorg), [Facebook](https://www.facebook.com/openpower/), [LinkedIn](https://www.linkedin.com/company/openpower-foundation), and [Google+](https://plus.google.com/117658335406766324024) to stay up to date with the latest news and use the hashtag [#OpenPOWERSummit](https://twitter.com/search?src=typd&q=%23OpenPOWERSummit) to join the conversation. + +**Vive la Révolution!** diff --git a/content/blog/openpower-summit-announces-revolutionizing-the-data-center-speaker-lineup.md b/content/blog/openpower-summit-announces-revolutionizing-the-data-center-speaker-lineup.md new file mode 100644 index 0000000..1669675 --- /dev/null +++ b/content/blog/openpower-summit-announces-revolutionizing-the-data-center-speaker-lineup.md @@ -0,0 +1,32 @@ +--- +title: "OpenPOWER Summit Announces “Revolutionizing the Data Center” Speaker Lineup" +date: "2016-02-09" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +### **Summit to feature 50+ member presentations, keynote speakers, technology demos and more** + +  + +**SAN JOSE, Calif., February 09, 2016 –** Today, the [OpenPOWER Foundation](https://openpowerfoundation.org/) announced the lineup of speakers for the [OpenPOWER Summit 2016](https://openpowerfoundation.org/openpower-summit-2016/), taking place April 5-8 at [NVIDIA’s GPU Technology Conference](http://www.gputechconf.com/) (GTC) at the San Jose Convention Center. The Summit will bring together dozens of technology leaders from the OpenPOWER Foundation to showcase the latest advancements in the OpenPOWER ecosystem, including collaborative hardware, software and application developments – all designed to revolutionize the data center. + +At the event, attendees will have access to more than 50 member presentations and will hear from newly appointed OpenPOWER leadership, including Chairman John Zannos and President Calista Redmond. + +The OpenPOWER Summit 2016 keynote speakers include: + +- OpenPOWER Chairman John Zannos will present “Building OpenPOWER momentum” +- OpenPOWER President Calista Redmond will deliver the opening keynote, “OpenPOWER: Revolution in the Data Center and Ecosystems Solutions.” +- Former OpenPOWER Foundation President Brad McCredie will discuss “OpenPOWER and the Roadmap Ahead” +- OpenPOWER Foundation Technical Steering Chair Jeff Brown will share an update on OpenPOWER’s most recent work group accomplishments, initiatives and next steps + +Member presentations will be delivered by IBM, Rackspace, Google, NVIDIA, Mellanox, PMC Sierra, Tyan, GlobalFoundries, NEC, PGI, Brocade, Bluebee, StackVelocity, E4 Computer Engineering, STFC Daresbury Laboratory, Xilinx, Nallatech, Jülich Supercomputing Centre, Algo-Logic Systems, Tsinghua University, LSU, Lund University, Semptian and more. Additional presentations will take place in the OpenPOWER exhibitor pavilion theater, where attendees will also have access to member technology demonstrations. The current list of presenters and abstracts can be found [here](https://openpowerfoundation.org/openpower-summit-2016/), and additional speakers will be revealed in the coming weeks. 
+
+Following the success of the [inaugural 2015 Summit](https://openpowerfoundation.org/press-releases/openpower-summit-showcases-altera-fpga-acceleration-technology/), also held at GTC, the OpenPOWER Foundation has grown to more than 175 members worldwide collaborating on more than 100 development projects and 1,900 applications.

To register for the OpenPOWER Summit, please visit [www.gputechconf.com/attend](http://www.gputechconf.com/attend).

To get the latest updates about the Summit and other OpenPOWER Foundation news, follow the Foundation on [LinkedIn](https://www.linkedin.com/groups/OpenPOWER-Foundation-7460635), [Facebook](https://www.facebook.com/openpower), [Twitter](https://twitter.com/openpowerorg) and [Google+](https://plus.google.com/117658335406766324024/posts) with the #OpenPOWERSummit hashtag. diff --git a/content/blog/openpower-summit-europe-collaboration.md b/content/blog/openpower-summit-europe-collaboration.md new file mode 100644 index 0000000..4ce402e --- /dev/null +++ b/content/blog/openpower-summit-europe-collaboration.md @@ -0,0 +1,28 @@ +--- +title: "OpenPOWER Summit Europe Provides Broad Collaboration Opportunity" +date: "2018-10-15" +categories: + - "blogs" +tags: + - "featured" +--- + +By [Yaroslav D. Sergeyev, Ph.D., D.Sc., D.H.C.,](http://wwwinfo.dimes.unical.it/~yaro/) President, International Society of Global Optimization and Head of Numerical Calculus Laboratory, University of Calabria

I recently attended the [OpenPOWER Summit Europe](https://openpowerfoundation.org/summit-2018-10-eu/) in Amsterdam.

Organized by the OpenPOWER Foundation, a group founded in 2013, the Summit is an excellent platform for exchanging ideas. It enables Foundation member organizations and data centers to rethink their approach to technology and to customize POWER CPU processors and system platforms for optimization and innovation to suit their business needs. The systems under consideration include those for large or warehouse-scale data centers, workload acceleration through GPU, FPGA or advanced I/O, platform optimization for software appliances, or advanced hardware technology exploitation.

The event had a strong technical emphasis and gave many developers, engineers, executives, decision makers and researchers the opportunity to learn from and work with one another over the course of two days. Presentation topics included PCIe Gen4, CAPI, OpenCAPI, Linux, OpenBMC, GPU, FPGA, I/O, Power Architecture, performance optimization, system management and more.

I found the sessions below to be particularly interesting. [Click here to view](https://openpowerfoundation.org/summit-2018-10-eu/) presentations from these sessions and more.

- Executive Remarks, Artem Ikoev
- Why Innovation Matters: The Power to Save Power, Fabrizio Magugliani
- FPGA-OpenPOWER Academic Work Group, Ganesan Narayanasamy
- TAU for Accelerating AI Applications, Sameer Shende
- FPGA-OpenCAPI and its Roadmap, Myron Slota

The AI4Good and OpenBMC Hackathons held during the Summit were well attended and attracted young developers.

I’m already looking forward to the next opportunity to meet, learn from and collaborate with OpenPOWER members in Europe. 
diff --git a/content/blog/openpower-summit-north-america-2019-counting-the-stars-neural-networks-for-star-classification.md b/content/blog/openpower-summit-north-america-2019-counting-the-stars-neural-networks-for-star-classification.md new file mode 100644 index 0000000..3169da1 --- /dev/null +++ b/content/blog/openpower-summit-north-america-2019-counting-the-stars-neural-networks-for-star-classification.md @@ -0,0 +1,26 @@ +--- +title: "OpenPOWER Summit North America 2019: Counting the Stars - Neural Networks for Star Classification" +date: "2019-09-19" +categories: + - "blogs" +tags: + - "openpower-summit" + - "openpower-foundation" + - "openpower-summit-north-america" + - "atos" + - "counting-the-stars" + - "gaia-satellite" + - "university-of-geneva" +--- + +By: Hugh Blemings, Executive Director, OpenPOWER Foundation + +![](images/Counting-the-Stars.png) + +The European Space Agency launched the [Gaia Satellite](https://sci.esa.int/web/gaia) in 2013. The satellite contains a 900 million-pixel camera and it takes a photograph of about 2 million stars every hour. As [Atos’](https://atos.net/en/) Jez Wain says, “it’s not your grand-dad’s digital camera.” + +Attendees at [OpenPOWER Summit North America](https://events.linuxfoundation.org/events/openpower-summit-north-america-2019/) this year were treated to a session by Wain on how machine learning can be used to help classify stars. After all, there are 200 billion stars in the Milky Way, and the European Space Agency's Gaia project has only mapped about 1% of them. The stars now need to be classified in much the same way as we classify animals and plants. Atos is working with the University of Geneva to investigate the use of machine learning to help with this classification problem. Wain’s talk described the Gaia project before presenting the approach taken to construct and optimize a neural network capable of classifying the different star types. + +Watch the full session below. + + diff --git a/content/blog/openpower-summit-north-america-2019-fpgas-in-the-datacenter.md b/content/blog/openpower-summit-north-america-2019-fpgas-in-the-datacenter.md new file mode 100644 index 0000000..fcdbe21 --- /dev/null +++ b/content/blog/openpower-summit-north-america-2019-fpgas-in-the-datacenter.md @@ -0,0 +1,30 @@ +--- +title: "OpenPOWER Summit North America 2019: FPGAs in the Datacenter" +date: "2019-10-04" +categories: + - "blogs" +tags: + - "openpower-summit" + - "openpower-foundation" + - "nimbix" + - "steve-hebert" + - "fpgas" + - "data-center" +--- + +By: Hugh Blemings, Executive Director, OpenPOWER Foundation + +![](images/Nimbix.png) + +FPGAs have been in the data center for a long time - so when we talk about them today, what we’re really discussing is the new way that FPGAs are being applied in computing. + +At OpenPOWER Summit North America in August, Nimbix CEO Steve Hebert dove deeper into the history of FPGAs, starting in the ‘80s and ‘90s to today, where we are seeing them at hyperscale. Building purpose-driven FPGA hardware for individual workloads - at hyperscale - allows us to process information even faster with the flexibility and reconfigurability FPGAs provide. This is key as the computing ecosystem leans even heavier into cloud and workloads such as machine learning and big data processing. + +Watch the entire session below. + + + +Interested in hearing more from Steve? Check out our interview with him on the impact of the open sourcing of the POWER ISA on the community and Nimbix specifically. + +

> Hear @Nimbix CEO @stevemhebert's thoughts on @IBMPowerSystems' latest contributions to the #opensource community and how they're set to transform the industry. #OpenPOWERSummit
>
> — OpenPOWER Foundation (@OpenPOWERorg) August 27, 2019
+ diff --git a/content/blog/openpower-summit-north-america-2019-introducing-the-microwatt-fpga-soft-cpu-core.md b/content/blog/openpower-summit-north-america-2019-introducing-the-microwatt-fpga-soft-cpu-core.md new file mode 100644 index 0000000..2d4f485 --- /dev/null +++ b/content/blog/openpower-summit-north-america-2019-introducing-the-microwatt-fpga-soft-cpu-core.md @@ -0,0 +1,37 @@ +--- +title: "OpenPOWER Summit North America 2019: Introducing the Microwatt FPGA Soft CPU Core" +date: "2019-10-09" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "xilinx" + - "openpower-foundation" + - "openpower-summit-north-america" + - "ibm-power-isa" + - "anton-blanchard" + - "microwatt" +--- + +By: Hugh Blemings, Executive Director, OpenPOWER Foundation + +![](images/Microwatt.png) + +The success of open source software has made the march toward open hardware that extends down to the chip level inevitable. With the [release of the IBM POWER ISA](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) at OpenPOWER Summit North America, we are one step closer to achieving that vision as an open technical commons. + +The number of inquiries that we have received since this announcement tells us we’re on the right track! Specifically, the Microwatt FPGA Soft CPU Core written in VHDL that was developed by [Anton Blanchard](https://www.linkedin.com/in/antonblanchard/?originalSubdomain=au) and his colleagues at IBM has all but stolen the show. + +While originally intended as a proof of concept, the core has garnered global interest from the open community - with intrepid early adopters contributing code to extend and improve it (check out some of the code on Github, [here](https://github.com/antonblanchard/microwatt)). + +So why the excitement? On the lowest level of the stack, Microwatt gives interested parties a way to play with custom instructions and changes to the CPU itself. As one open source developer put it to me “It’s just a _make_ away” + +Microwatt also gives developers the opportunity to try out a basic 64-bit POWER core on low cost FPGA hardware or even in a software simulation environment. + +Taken together these in turn open up the possibility of embedded and purpose-built accelerator applications based on POWER - developed with something like Microwatt, implemented on a high end FPGA, ASIC or custom silicon. That’s pretty cool. + +I mentioned in a passing [tweet](https://twitter.com/hughhalf/status/1179613610219171841) last week that I was lucky enough to be privy to discussions about where this could all lead. If you’re at [Open Source Summit](https://events19.linuxfoundation.org/events/open-source-summit-europe-2019/) this month, I encourage you to stay an extra day or two to check out our upcoming [OpenPOWER Summit Europe](https://events.linuxfoundation.org/events/openpower-summit-eu-2019/) - we are currently adding additional open hardware sessions, and I have a _hunch_ that there may be a pretty special announcement about Microwatt and other things on the open ISA too. + +Watch Anton’s entire session on Microwatt at the OpenPOWER Summit North America below. 
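For readers who would like to try the core themselves, here is a minimal quick-start sketch. The repository location and the fact that "it's just a _make_ away" come from the post above; the GHDL prerequisite (the core is written in VHDL) and the exact build targets are assumptions on my part, so treat the project README as the authoritative guide.

```sh
# Hypothetical quick-start sketch -- consult the Microwatt README for the canonical steps.
# Assumes a VHDL simulator such as GHDL is installed, since the core is written in VHDL.
git clone https://github.com/antonblanchard/microwatt
cd microwatt
make    # "it's just a make away" -- builds the default simulation model
```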
+ + diff --git a/content/blog/openpower-summit-north-america-2019-opencapi-acceleration-framework-unleash-the-power-of-customized-accelerators.md b/content/blog/openpower-summit-north-america-2019-opencapi-acceleration-framework-unleash-the-power-of-customized-accelerators.md new file mode 100644 index 0000000..2c3158a --- /dev/null +++ b/content/blog/openpower-summit-north-america-2019-opencapi-acceleration-framework-unleash-the-power-of-customized-accelerators.md @@ -0,0 +1,28 @@ +--- +title: "OpenPOWER Summit North America 2019: Unleash the Power of Customized Accelerators" +date: "2019-09-25" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "fpga" + - "openpower-foundation" + - "openpower-summit-north-america" + - "opencapi-acceleration-framework" + - "oc-accel" +--- + +By: Hugh Blemings, Executive Director, OpenPOWER Foundation + +![](images/OpenCAPI.png) + +Porting functions to FPGA has never been so easy! + +At this year’s [OpenPOWER Summit North America](https://events.linuxfoundation.org/events/openpower-summit-north-america-2019/), IBM’s Yong Lu hosted a session on the OpenCAPI Acceleration Framework, abbreviated as OC-Accel. OC-Accel is a platform that enables programmers and computer engineers to quickly create FPGA-based accelerations. + +Developers can use OC-Accel to boost acceleration performance. The framework enables global memory sharing, open and easy developing, improved latency and increased throughput. (In fact, OC-Accel actually enables the best throughput performance in the world!) + +To learn more about the benefits of OC-Accel, watch the full session below. + + diff --git a/content/blog/openpower-summit-north-america-2019-openpower-solution-builder-community.md b/content/blog/openpower-summit-north-america-2019-openpower-solution-builder-community.md new file mode 100644 index 0000000..e9d4b4b --- /dev/null +++ b/content/blog/openpower-summit-north-america-2019-openpower-solution-builder-community.md @@ -0,0 +1,32 @@ +--- +title: "OpenPOWER Summit North America 2019: OpenPOWER Solution Builder Community" +date: "2019-10-17" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "openpower-solution-builder-community" + - "christopher-sullivan" + - "john-pace" +--- + +By: Hugh Blemings, Executive Director, OpenPOWER Foundation + +![](images/Solution-Builder.png) + +Do you design on and maintain POWER and OpenPOWER solution stacks? Are you looking for a community of POWER builders to exchange ideas with? The OpenPOWER Solution Builder Community might be just what you’re looking for. + +Two members of the Community, [Christopher Sullivan](https://www.linkedin.com/in/christopher-m-sullivan-446904/), an Assistant Director for Biocomputing at Oregon State University, and [John Pace](https://www.linkedin.com/in/john-pace-phd-20b87070), Senior Data Scientist for Mark III Systems, stopped by OpenPOWER Summit North America in August to share insight into the group’s functions. + +Aimed at those who design, implement, operate and maintain POWER and OpenPOWER solution stacks, the group is self-governing and provides a means for builders to share insights and innovation. The Community serves three main functions: + +1. A place to ask and answer questions, and provide pointers to relevant information +2. Exchanging of best practices on building hardware-software solution stacks and architectures +3. 
Collaborate on innovations of applications, libraries, methods and approaches + +Right now, the Community is focused on solution stacks around the IBM POWER9 AC922 integrated GPU server, but they have plans to expand to additional systems and workloads! + +Watch the entire session below. + + diff --git a/content/blog/openpower-summit-north-america-meet-speaker-j-lynn.md b/content/blog/openpower-summit-north-america-meet-speaker-j-lynn.md new file mode 100644 index 0000000..f434aea --- /dev/null +++ b/content/blog/openpower-summit-north-america-meet-speaker-j-lynn.md @@ -0,0 +1,43 @@ +--- +title: "OpenPOWER Summit North America: Meet Speaker J Lynn" +date: "2019-08-15" +categories: + - "blogs" +tags: + - "power" + - "openpower-summit" + - "openpower-foundation" + - "j-lynn" +--- + +This year’s [OpenPOWER Summit North America](https://events.linuxfoundation.org/events/openpower-summit-north-america-2019/) will be jam-packed with ground breaking announcements and technical presentations given by innovators from around the globe. We asked Scalable Systems Engineer and session speaker [J Lynn](https://twitter.com/justinrwlynn) about their thoughts on the future of technology and what audiences will learn by attending the sessions they will be leading at this year’s event. + +Learn more about J below! + +**Tell us about your day job and what you work on.** + +My title is Scalable Systems Engineer, and as you’d suspect from the name, my duties involve designing, implementing, and operating information processing systems and environments, at all scales. + +**Can you describe the sessions you’ll be leading at OpenPOWER Summit North America and a few key takeaways for the audience?** + +I'll be leading two sessions at this year’s OpenPOWER Summit: [Using an OpenPOWER Workstation for Everything, Every Day: A Guided Tour](https://openpowerna19.sched.com/event/SPZ7/using-an-openpower-workstation-for-everything-every-day-a-guided-tour-j-lynn?iframe=no&w=100%25&sidebar=yes&bg=no) and [Project Xevadyne / X: Building a High Performance, Open Source POWER Compatible Microarchitecture - Going from ISA to Source to FPGA to Foundry and Beyond](https://openpowerna19.sched.com/event/THCr/project-xevadyne-x-building-a-high-performance-open-source-power-compatible-microarchitecture-going-from-isa-to-source-to-fpga-to-foundry-and-beyond-j-lynn?iframe=no&w=100%25&sidebar=yes&bg=no). + +The first session I’ll be leading is a result of my work on the second, which stemmed from my desire to have a provably secure, owner-controlled computer system – the entirety of which could be understood at every level of complexity. I believe that the technologies underlying our everyday lives should be available and understandable by any motivated person. In a very real sense, computers have become extensions of each of our minds. Therefore, naturally, they should be just as trustworthy. That’s why I choose to contribute my time to free and open source software projects. + +With this in mind, I chose OpenPOWER systems as my primary computing platform. It is the most performant and best documented computing platform available. The existing software ecosystem is excellent and diverse, and every application I've needed has been or is currently being ported to the POWER ISA. Another reason why I chose OpenPOWER is, hands down, the community. 
The group of passionate, diverse people who work with these systems seem to understand the spirit of IBM's motto: “Think.” Overall, that's why I want to speak about what it's like to use the platform on a daily basis and show others what works, what doesn't, and how we can all work together to make it better. + +**What else are you looking forward to at this year’s OpenPOWER Summit? What do you think this year’s “can’t miss” sessions are?** + +For me, it's all about the Protected Execution Facility (PEF) and Ultravisor. That one function will drive a paradigm shift in how we use and trust cloud services, which, in turn, will drive a paradigm shift in how we exchange, use, and trust computers and computing services in general. + +I’d recommend checking out [Protected Execution Facility on POWER](https://openpowerna19.sched.com/event/SfRU/protected-execution-facility-on-power-guerney-hunt-ram-pai-michael-anderson-ibm?iframe=no&w=100%25&sidebar=yes&bg=no) and [Securing Containers Using Power Protected Execution Facility](https://openpowerna19.sched.com/event/SpCg/securing-containers-using-power-protected-execution-facility-harshal-patil-pradipta-banerjee-ibm?iframe=no&w=100%25&sidebar=yes&bg=no). + +**What’s a current technology or topic that is especially exciting to you, and where do you think that technology is headed in the next 5-10 years?** + +Like I mentioned, I’m incredibly interested in Protected Execution Facility (PEF) and Ultravisor. This new, secure platform on which we can build, as enabled by PEF and the Ultravisor design, will enable us to create a virtual hyperscale cloud provider as reliable as the whole internet across all jurisdictions, which can onboard new technologies at an ever accelerating pace. + +Computing will become a commodity traded freely, on a real-time global marketplace, without single source risk. That explicit trust and its associated risks will become redressable and priceable. + +In essence: portable, trustworthy, open, and evolving. Finally – and all of it driven by the unprecedented levels of open in OpenPOWER. + +Definitely keep an eye on this space. It's going to be gargantuan, and IBM is at the forefront of it with OpenPOWER and their latest implementation of POWER, the next revision of POWER9. diff --git a/content/blog/openpower-summit-showcases-altera-fpga-acceleration-technology.md b/content/blog/openpower-summit-showcases-altera-fpga-acceleration-technology.md new file mode 100644 index 0000000..39753e8 --- /dev/null +++ b/content/blog/openpower-summit-showcases-altera-fpga-acceleration-technology.md @@ -0,0 +1,48 @@ +--- +title: "OpenPOWER Summit Showcases Altera FPGA Acceleration Technology" +date: "2015-03-12" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +SAN JOSE, Calif., March 12, 2015 /PRNewswire/ -- Altera Corporation (Nasdaq: [ALTR](http://studio-5.financialcontent.com/prnews?Page=Quote&Ticker=ALTR "ALTR")) today announced its FPGA acceleration solutions are being prominently showcased throughout the OpenPOWER Summit 2015. Luminaries from industry and academia participating in the event are using the OpenPOWER Summit to **"_Rethink the Data Center_"** through panel discussions, presentations and demonstrations. Altera and its partners are showing attendees how FPGAs are enabling the development of highly efficient, highly differentiated data center acceleration solutions. The OpenPOWER Summit takes place at the San Jose Convention Center in San Jose, Calif., March 17-19. 
+
+As an OpenPOWER Foundation member, Altera is collaborating with several partners to develop high-performance compute solutions that integrate IBM POWER® CPUs with Altera's FPGA-based acceleration technologies. Altera and its partners are leveraging the OpenPOWER Summit to reveal a wide range of FPGA-based OpenPOWER solutions, including FPGA-acceleration applications programmed using OpenCL.

**Demonstrations:**

- Altera is demonstrating its OpenPOWER CAPI Developer Kit with coherent shared memory between an IBM POWER8 CPU and an FPGA accelerator, leveraging IBM's Coherent Accelerator Processor Interface (CAPI) and programmed using OpenCL.
- Algo-Logic Systems is demonstrating a CAPI-enabled order book. The solution processes level-3 market data, sorts orders, and transfers level-2 snapshots and BBO pricing to POWER8 shared memory using Stratix® V FPGAs on a Nallatech CORSA card.
- Algo-Logic Systems is also demonstrating a Key-Value Search (KVS) solution targeting large data centers.
- Nallatech is showcasing its CAPI Hardware Developer Kit, featuring an Altera Stratix® V FPGA, along with a PMC NVM Express-attached demo.

**Presentations:**

- Nick Finamore, Altera, chair of the OpenPOWER accelerator workgroup, will provide an overview of accelerator opportunities for OpenPOWER.
- John Lockwood, Algo-Logic, is describing how financial services firms are using FPGAs to compute heterogeneously with Gateware Defined Networking (GDN) to build order books and trade with the lowest latency.
- Allan Cantle, Nallatech, is presenting on enabling coherent FPGA acceleration.
- Stephen Bates, PMC-Sierra, is showcasing an NVMe demo using Altera FPGAs and PMC's FLASHTec controllers.
- Jeff Cassidy, University of Toronto, is presenting CAPI-attached photodynamic cancer therapy planning with FullMonte on OpenPOWER, accelerated with Altera FPGAs.

**About the OpenPOWER Summit**

The OpenPOWER Summit is hosted within the GPU Technology Conference (GTC) and includes attendees from the technology sector, developers, researchers and government agencies. The summit has a lineup of keynote speakers, technical workgroup updates and member presentations. The three-day event will kick off the morning of Tuesday, March 17 with an exhibitor pavilion where OpenPOWER members will display and demonstrate OpenPOWER-based products and projects. More information can be found at [https://openpowerfoundation.org/2015-summit/](https://openpowerfoundation.org/2015-summit/)

**About Altera**

Altera® programmable solutions enable designers of electronic systems to rapidly and cost-effectively innovate, differentiate and win in their markets. Altera offers FPGA, SoC, and CPLD products, and complementary technologies, such as power management, to provide high-value solutions to customers worldwide. Visit [www.altera.com](http://www.altera.com/).

ALTERA, ARRIA, CYCLONE, ENPIRION, MAX, MEGACORE, NIOS, QUARTUS and STRATIX words and logos are trademarks of Altera Corporation and registered in the U.S. Patent and Trademark Office and in other countries. All other words and logos identified as trademarks or service marks are the property of their respective holders as described at [www.altera.com/legal](https://www.altera.com/about/legal.html). 
+ +**Editor Contact:** Steve Gabriel Altera Corporation (408) 544-6846 [newsroom@altera.com](mailto:newsroom@altera.com) + +Logo - [http://photos.prnewswire.com/prnh/20101012/SF78952LOGO](http://photos.prnewswire.com/prnh/20101012/SF78952LOGO) + +  + +SOURCE Altera Corporation + +RELATED LINKS [http://www.altera.com](http://www.altera.com/ "Link to http://www.altera.com") diff --git a/content/blog/openpower-the-best-combination-of-open-and-high-performance.md b/content/blog/openpower-the-best-combination-of-open-and-high-performance.md new file mode 100644 index 0000000..ba1ccb7 --- /dev/null +++ b/content/blog/openpower-the-best-combination-of-open-and-high-performance.md @@ -0,0 +1,62 @@ +--- +title: "OpenPOWER - The Best Combination of Open and High Performance" +date: "2019-11-07" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "nvidia" + - "mellanox" + - "xilinx" + - "wistron" + - "openpower-foundation" + - "red-hat" + - "inspur" + - "yadro" + - "power-isa" + - "microwatt" + - "raptor-computing" +--- + +By [Hugh Blemings](https://www.linkedin.com/in/hugh-blemings/), Executive Director, OpenPOWER Foundation + +At the OpenPOWER Foundation, creating a level of open hardware has always been one of our core values. In fact, the Foundation was created back in 2013 to encourage open innovation at a system level around POWER technologies. As of mid-2019, we have seen that innovation realised in our ecosystem in the form of hundreds of products spanning systems, accelerators, adapters, commercial and open source software. Then in August of this year, we, along with IBM, [took our commitment to being open even deeper into the stack](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) with the opening of the POWER instruction set architecture (ISA). + +During my opening remarks at the recent OpenPOWER Summit in Lyon, I sketched the linkage between two distinct extremes of hardware in the OpenPOWER Ecosystem and the impact this has on our ecosystem overall. + +At one extreme we have the world's fastest supercomputers - for example the [Summit](https://www.olcf.ornl.gov/summit/) and [Sierra](https://computing.llnl.gov/computers/sierra) systems in the US that draw on the expertise of over a dozen OpenPOWER Foundation members including IBM, Nvidia, Mellanox and Red Hat. + +At the other extreme is the release of the [Microwatt FPGA Soft CPU Core\[0\]](https://github.com/antonblanchard/microwatt). Implemented in VHDL and released under an open source license, this proof of concept core runs at about 100MHz and fits on an embedded board barely 2.5” by 1”. Performance is modest to be sure, but in this particular case, it’s not so much about the FLOPS as it is about flexibility. + +See, both Microwatt and Summit/Sierra _share a common Instruction Set Architecture_. Granted, Microwatt at present is confined to the subset of integer instructions, but the toolchain (compilers, linkers, loaders etc.) used to create software for it are identical and can generate code without modification for either run time environment\[1\].  Indeed, the developers of Microwatt used a recent version of Fedora to build the binaries to exercise the implementation during development. + +This commonality of instruction set architecture accomplishes several things. + +First, it provides groups interested in developing bespoke acceleration hardware that needs extensions to the ISA with a stable and mature environment in which to experiment, prototype and implement. 
You could put Microwatt core(s) into a large Xilinx FPGA alongside your specialised FPGA acceleration logic, add your custom instructions to the ISA\[2\] and do your proof of concept before migrating to an ASIC or full silicon solution. Along the way, you’ll be able to participate in the ISA Working Group to ensure the instructions you add will fit into the official ISA going forward.

Secondly, a common ISA gives developers of embedded systems a mature 64-bit ISA and associated ecosystem to work with, whether their target is FPGAs, ASICs or a full custom silicon SoC. It is clear from conversations we have had since the opening of the ISA was announced in August that there is an appetite for a high-performance, open 64-bit ISA to complement the increasingly vibrant (and, in a good way, less complex) 32-bit open ISAs that have captured so much mindshare.

Last, but by no means least, this architecture enables an even lower-cost entry point for individual developers looking to “tinker” with a 64-bit POWER platform - a great complement to things like [Raptor Computing’s](https://www.raptorcs.com/) developer and mid-range Blackbird and Talos II POWER9-based systems, or [Inspur](https://www.inspurpower.com/), [IBM](https://www-355.ibm.com/systems/power/openpower/), [Yadro](https://yadro.com/tatlin) and [Wistron’s](https://openpowerfoundation.org/wistron-introduces-new-concepts-and-demonstrates-mihawk-results-at-openpower-china-summit-2018/) high-end servers that power data centres and supercomputers around the globe.

All from a common ISA (a very mature one at that).

In her [keynote at the OpenPOWER Summit](https://www.youtube.com/watch?v=ufBtrGJVF6g&list=PLEqfbaomKgQoZf-PgLWIA_on6Cj25volf&index=31&t=2s) in Lyon last week, [Mendy Furmanek](https://www.linkedin.com/in/mendy-furmanek-640425/), President of the OpenPOWER Foundation, compared an ecosystem to the sea - when the tide rises, all the boats, large and small, rise up with it. Likewise a software or hardware ecosystem - as more ways to contribute arise, everyone benefits. The recent opening of the POWER ISA and the release of Microwatt provide two new ways to get involved in the OpenPOWER ecosystem.

If you’re at [SC19 in Denver](https://sc19.supercomputing.org/) this month, drop by the OpenPOWER Booth (Booth 1494) or our stand in the [IBM Booth (Booth 1525)](https://www.ibm.com/it-infrastructure/resources/events/supercomputing) to chat about how you can do just that. Perhaps you are an individual contributor who would like to develop on OpenPOWER. Perhaps you are looking to develop a custom acceleration solution and want a robust and mature ISA to underpin your work, or perhaps you are looking to develop an indigenous microprocessor for hyperscale and HPC applications and want to draw on a proven HPC-ready ISA.

**OpenPOWER can help you get there.**

We look forward to you joining us in this new chapter in OpenPOWER’s journey!

P.S. 
Rumor has it that Microwatt will be ready to run a standard Linux distro in the not too distant future, stay tuned… ;) + +**FOOTNOTES** + +\[0\] To hear more about Microwatt, check out Anton Blanchard and Michael Neuling’s presentation at the EU Summit on our [Youtube channel](https://www.youtube.com/watch?v=qXUh7w_mfR0&list=PLEqfbaomKgQoZf-PgLWIA_on6Cj25volf&index=6&t=355s) + +\[1\] For the very technically inclined, Microwatt uses the standard PPC64LE toolchain with compiler flags set to prevent generation of floating point and vector instructions which, in its current form at least, microwatt doesn’t implement. + +As an example \- `CFLAGS = -Os -g -Wall -std=c99 -msoft-float -mno-string -mno-multiple -mno-vsx -mno-altivec -mlittle-endian -fno-stack-protector -mstrict-align -ffreestanding -fdata-sections -ffunction-sections` + +\[2\] There is even [a tutorial](https://www.talospace.com/2019/09/a-beginners-guide-to-hacking-microwatt.html), contributed by an open source community member, on how to add a simple instruction already available diff --git a/content/blog/openpower-tops-off-first-year-with-80-members-worldwide-and-12-systems-under-development.md b/content/blog/openpower-tops-off-first-year-with-80-members-worldwide-and-12-systems-under-development.md new file mode 100644 index 0000000..9225404 --- /dev/null +++ b/content/blog/openpower-tops-off-first-year-with-80-members-worldwide-and-12-systems-under-development.md @@ -0,0 +1,56 @@ +--- +title: "OpenPOWER Gains Momentum Heading into Second Year" +date: "2014-12-15" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +# Dozens of Products Introduced and Under Development, Six Work Groups Chartered, Rackspace and Others Expand Roster to 80 Members Worldwide + +PISCATAWAY, N.J., Dec. 16, 2014 /PRNewswire-USNewswire/ -- One year after its formation, the [OpenPOWER Foundation](http://www.openpowerfoundation.org/) today announced continued membership growth and increasing momentum in open server product design and development with dozens of products introduced and under development. The organization's members, now 80 strong worldwide and growing, are expected to continue this momentum with new systems, solutions and deployments planned for 2015. + +A full slate of development activities are planned through the OpenPOWER Foundation work groups as well as collaborative member projects. For example, currently 12 members are designing OpenPOWER systems, and several universities are conducting research with OpenPOWER based technologies.  These projects and other member work underway build upon a growing set of OpenPOWER compatible solutions introduced in the last quarter of 2014 including: + +- [Nallatech collaboration with Altera](http://www.nallatech.com/nallatech-collaborates-with-openpower-foundation-members-ibm-and-altera-to-launch-innovative-capi-fpga-accelerator-platform/) produced the OpenPOWER CAPI Developer Kit for IBM POWER8, announced November 10. +- [Tyan](http://www.tyan.com/newsroom_pressroom_detail.aspx?id=1648) launched the TYAN GN70-BP010, the world's first OpenPOWER customer reference system, announced on October 8. +- [NVIDIA and IBM](http://www-03.ibm.com/press/us/en/pressrelease/45006.wss) collaboration produced the IBM Power S824L server with GPU acceleration, announced on October 3. 
+- [Redis Labs, Altera, Canonical and IBM](http://www-03.ibm.com/press/us/en/pressrelease/45006.wss) collaboration produced the IBM Data Engine for NoSQL, a CAPI-enabled solution for NoSQL data stores, announced on October 3 + +"We're very excited with what we've accomplished over the past year, not just in terms of our expanding roster but as measured by our ability to tap into the unique talents each member brings to our growing community of innovators," said OpenPOWER Chairman Gordon MacKean. "We wanted to make OpenPOWER a community where industry leaders could easily leverage one another's capabilities and technology to address the performance bottlenecks of today's servers. Our progress towards that goal becomes more evident with each new solution made available by our members and our success is highlighted by the performance gains demonstrated by these different solutions." + +**OpenPOWER Charters Sixth Work Group** + +To support a wide range of technical development efforts, the OpenPOWER Foundation's Technical Steering Committee has chartered six work groups.  The latest one formed addresses interoperability, allowing different server component technologies to work together.  Named the 25G IO Interoperability Mode Work Group, this new work group focuses on physical interfaces -- the wiring that connects componentry -- and will provide members a forum to work out an interoperability mode for custom 25Gbps PHYs. + +The interoperability work group joins the previously established Systems Software, OpenServer Development Platform, Hardware Architecture, Compliance and Accelerator work groups. Additional work group charters are under development, including one that will address open source application software. + +**Newest Members Expand OpenPOWER Areas of Expertise** + +In twelve months, OpenPOWER has grown from 5 founders to 80 members worldwide. OpenPOWER's newest members join a diverse set of leaders from across the technology industry from cloud service providers and technology consumers to chip designers, hardware components, system vendors and firmware and software providers and beyond. + +Today, [Rackspace](http://www.rackspace.com/blog/openpower-opening-the-stack-all-the-way-down/) becomes the latest member to join OpenPOWER, bringing new expertise and perspective to OpenPOWER.  A leading managed cloud company, Rackspace is active in open server design and plays a leading role in the [Open Compute Project](http://www.opencompute.org/) and[OpenStack®](http://www.openstack.org/). + +"Rackspace has been working with OpenPOWER founders for more than 18 months, and we are excited to officially join the OpenPOWER Foundation," said Aaron Sullivan, senior director and distinguished engineer, infrastructure strategy at Rackspace. "OpenPOWER brings an increasingly open firmware stack, and deeper access to chips, memory, and storage than anywhere else. This is unprecedented, and it invites the open source community to participate at all layers.  It's our vision that OpenPOWER enables OpenStack and Open Compute developers to work all the way down the stack. Where Open Compute opened and revolutionized data center hardware and OpenStack opened up cloud software and infrastructure-as-a-service, OpenPOWER is doing the same for the last black boxes in our servers: chips, buses and firmware." 
+
+The addition of Lawrence Livermore National Laboratory and Sandia National Laboratories, along with the world-renowned academic institutions Tsinghua University and the Indian Institute of Technology Bombay, broadens the organization's span of expertise and implementation in the areas of research, applied science and academia.

OpenPOWER also welcomes Avnet, a leading global technology distributor. As a provider of global channel distribution and integration services, Avnet can expose OpenPOWER-compatible offerings to a broader range of clients worldwide in a variety of industries.

"While the open source movement has largely focused on innovations driven by software, we recognize that there is a tremendous opportunity to drive even more exciting technology breakthroughs by fostering open collaboration at all levels of design, including hardware development," said Tony Madden, Avnet global supplier business executive. "Collaboration in the OpenPOWER Foundation enables Avnet customers to drive more options to differentiate their next-generation server and storage systems."

A [current list of members](https://openpowerfoundation.org/membership/current-members/) is available on the OpenPOWER website.

**First OpenPOWER Summit Takes Place in March**

Plans are underway for the first OpenPOWER Summit, a conference and exhibition that will be held March 17-19, 2015, at the San Jose Convention Center in California. The three-day event will feature a keynote from OpenPOWER Chairman Gordon MacKean, member presentations, and an OpenPOWER exhibitor pavilion where members will demonstrate their latest advancements in OpenPOWER-based applications, platforms and research while networking with industry peers.

Registration and further details about the event are available at [www.openpowerfoundation.org/2015-summit](http://www.openpowerfoundation.org/2015-summit).

**About the OpenPOWER Foundation**

Founded in December 2013, the OpenPOWER Foundation is a global, independent technical membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large or warehouse-scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation.

**Media Contact:** Kristin Bryson OpenPOWER Media Relations office: 914-766-4221, mobile: 203-241-9190 email: [kabryson@us.ibm.com](mailto:kabryson@us.ibm.com) diff --git a/content/blog/openpower-two-years-later.md b/content/blog/openpower-two-years-later.md new file mode 100644 index 0000000..69e2a44 --- /dev/null +++ b/content/blog/openpower-two-years-later.md @@ -0,0 +1,38 @@ +--- +title: "Opening the Flood Gates- OpenPOWER Two Years Later" +date: "2015-08-06" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "google" + - "nvidia" + - "mellanox" + - "tyan" + - "featured" + - "ecosystem" + - "hpc" +--- + +**By Brad McCredie, President, OpenPOWER Foundation**

_The journey of a thousand miles begins with one step. 
– Lao Tzu, Chinese Philosopher_ + +Two years ago, on August 6, 2013, IBM, along with Google, Tyan, NVIDIA and Mellanox, came together to announce the creation of OpenPOWER with the goal of building a worldwide collaborative ecosystem based on IBM’s POWER architecture. This bold move reversed the ongoing trend of data center architectures becoming increasingly closed.  IBM built a broad partnership with technology providers and clients to build an open data center platform that would allow collaboration and foster innovation. Nobody knew exactly what would happen next, but we all knew that for better or worse we would be turning the world of IT infrastructure on its head. + +Until that moment, Internet-scale cloud providers and other compute-centric industries had been forced to use one-size-fits-all servers powered by commodity x86 processors. The market lacked choice, and the trend was moving towards a completely closed, one supplier architecture. That’s what we set out to disrupt, and with technology innovators and consumers alike now able to license and modify POWER technologies, anyone can design or purchase systems custom-tailored to their needs. This open licensing has helped to grow a vast ecosystem of developers, ISVs, hardware manufacturers, academic centers, and individuals all committed to advancing innovation around OpenPOWER. + +_[![OpenPOWER Member Segments](images/OPF-Members.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/08/OPF-Members.jpg)_The [secret sauce of OpenPOWER](https://openpowerfoundation.org/blogs/the-open-secret-behind-the-success-of-openpower/) lies in the open business model, which allows this ecosystem to continue to deliver innovation through collaboration. This has led to over a dozen new hardware solutions and we’re not stopping there. Today, there are [147 members](https://openpowerfoundation.org/membership/current-members/) across 22 countries in the OpenPOWER Foundation, and thousands of developers working on bringing new OpenPOWER-based innovations to market. There are now more than 1600 Linux applications running on POWER, including popular database and data analytics applications like Redis Labs, MariaDB, Hadoop, MongoDB, Zend and Apache Spark, as well as HPC applications like AMBER, GROMACS, NAMD, GAMESS, WRF, HYCOM, BLAST, BWA, Bowtie and SOAP. The collaboration that is taking place is staggering. We’ve got hundreds of collaborative projects and POCs underway across members and together with end-users. + +These solutions and applications are being built around the globe, as OpenPOWER members have established dozens of hands-on development centers around the world that provide tools and access to the latest OpenPOWER platforms. In addition, developers can now utilize OpenPOWER anywhere thanks to [SuperVessel](http://www-03.ibm.com/press/us/en/pressrelease/47082.wss), a free OpenPOWER-based cloud service designed to bring university students, business partners and developers into the growing ecosystem to create apps. + +In the cutting edge arena of HPC, OpenPOWER is emerging as a leader, as members introduce several supercomputing centers where researchers and developers can take advantage of GPU-acceleration on OpenPOWER-compatible systems and drive new technology development. 
These include the [POWER Acceleration and Design Center](https://www-03.ibm.com/press/us/en/pressrelease/47222.wss) in Montpellier, France, the [Jülich Supercomputing Center](http://www-03.ibm.com/press/us/en/pressrelease/45350.wss) in Germany, and state-of-the-art supercomputers at the [Oak Ridge and Lawrence Livermore National Laboratories](http://www-03.ibm.com/press/us/en/pressrelease/45387.wss) (US).  Validation of the open strategy has come in key wins with the US Dept of Energy CORAL project and a significant investment from the STFC in the UK. + +As Lao Tzu said, every journey begins with a single step, and we’ve been amazed at what we’ve discovered, created, and disrupted along ours. As with every journey, we are still working our way along, and we couldn’t be more excited about revealing what lays ahead for OpenPOWER and the revolution we’ve only just begun. + +Come along on our journey as we have many more exciting updates coming in the near future, so be sure to continue following OpenPOWER on [Twitter](https://twitter.com/OpenPOWERorg), [Facebook](https://www.facebook.com/openpower), and [LinkedIn](https://www.linkedin.com/grp/home?gid=7460635) for the latest updates. + +_About Brad McCredie_ + +_[![Untitled-2](images/Untitled-2.jpg)](https://openpowerfoundation.org/wp-content/uploads/2014/03/Untitled-2.jpg)Dr. Bradley McCredie is an IBM Fellow, Vice President of IBM Power Systems Development and President of the OpenPOWER Foundation. Brad first joined IBM focusing on packaging for IBM’s mainframe systems. He later took a position within the IBM Power Systems development organization and has since worked in a variety of development and executive roles for POWER-based systems. In his current role, he oversees the development and delivery of IBM Power Systems that incorporate the latest technology advancements to support clients' changing business needs._ diff --git a/content/blog/openpower-virtual-coffee-calls.md b/content/blog/openpower-virtual-coffee-calls.md new file mode 100644 index 0000000..624ab15 --- /dev/null +++ b/content/blog/openpower-virtual-coffee-calls.md @@ -0,0 +1,46 @@ +--- +title: "OpenPOWER “Virtual Coffee” Calls" +date: "2020-03-23" +categories: + - "blogs" +tags: + - "hugh-blemings" + - "open-source" + - "covid-19" + - "virtual-coffee" + - "teleconference" +--- + +As we deal with the serious implications of COVID-19, the team at the OpenPOWER Foundation got to thinking about how a technical organisation like ourselves can connect and support our community. + +So we hit upon the idea of running ongoing, informal weekly teleconference calls **(dial-in details below)** to help the open-source community stay in touch in a time of restricted travel options. + +[Hugh Blemings](https://www.linkedin.com/in/hugh-blemings/), formerly the OpenPOWER Foundation ED and now Board Advisor, will host these informal calls.  We'll begin with introductions, then we will encourage participants to share updates and, time permitting, have a Q&A at the end. In turn, participants who want to can each have a few minutes to give a summary of what they're working on with OpenPOWER or any other interesting open-source projects. The intended format is meant to be similar to agile "Standup Meetings," but more informal. Calls won't be recorded or minuted, and those calling in can present something as fun or as formal as they prefer. If you are interested in just listening in, we encourage you to join as well! 
+ +These calls can also serve as an opportunity for you to ask for any help that you might need, getting support from the community. + +**These calls occur every week on Tuesday 2200h UTC** **for 30-45 minutes**. We originally had two times but the "UTC Morning" calls were lightly attended so we dropped back to one a week.  Hopefully this still works for the schedules of the majority of interested people across the globe. We'll keep the section below up to date about calls but you can safely assume they are _usually_ on! + +We'll have these calls whether there are 2, 20 or 200 folks on the line - though if it's the latter we'll need to get clever about passing the virtual microphone around. :) + +You're welcome to call in when you can and stay for as long as you like, but we do ask that you are respectful of your fellow attendees. If you’re interested in joining ways to do so are detailed below. + +Look forward to staying in touch! + +# **Call-in Details etc.** + +The calls will occur every week on Tuesdays 2200h UTC and will last 30-45 minutes. The first call will be **Thursday 26 March** and will continue until further notice, there is a public calendar you can follow for updates - [web page](https://calendar.google.com/calendar/embed?src=j7ncevllkdf5ov4rfdpo561n7g%40group.calendar.google.com&ctz=Australia%2FMelbourne) or [iCal format](https://calendar.google.com/calendar/ical/j7ncevllkdf5ov4rfdpo561n7g%40group.calendar.google.com/public/basic.ics) + +The call will be a Zoom teleconference - [https://us02web.zoom.us/j/89040781548](https://us02web.zoom.us/j/89040781548) (note the Zoom details changed for calls after May 25, 2020) + +There are Zoom clients/plugins for most popular desktop and phone operating systems as well as a native browser mode known to work on POWER and x86 versions of the Chrome browser. + +To start the web client directly from your browser, you may need to click the “click here to launch the meeting” link twice to get the required link to appear. + +You can also dial in directly from a regular phone by using the Meeting ID: 890 4078 1548 and consulting the list of local numbers here: [https://us02web.zoom.us/u/kbbijfwRms](https://us02web.zoom.us/u/kbbijfwRms) + +If for some reason we have to change any aspect of the call, we’ll update this page accordingly. + +_Updated: August 4 to reflect shift to one call a week on Tuesday evenings 2200 UTC_ + +_Previous updates: No Call 18 June 2200 UTC, 4 June to note a one off call cancellation and add the online calendar info; 29 May 2020 to reflect new Zoom callin information and Hugh's new role._ diff --git a/content/blog/openpowerchat-openpower-summit-2018.md b/content/blog/openpowerchat-openpower-summit-2018.md new file mode 100644 index 0000000..9e8b877 --- /dev/null +++ b/content/blog/openpowerchat-openpower-summit-2018.md @@ -0,0 +1,53 @@ +--- +title: "#OpenPOWERChat Provides a Sneak Peek of OpenPOWER Summit 2018" +date: "2018-03-06" +categories: + - "blogs" +tags: + - "openpower-summit" + - "twitter-chat" + - "robbie-williamson" + - "openpower-chat" +--- + +Robbie Williamson, Chair of the Board, OpenPOWER Foundation + +Last week was a special occasion: the OpenPOWER Foundation hosted its first Twitter Chat – moderated by yours truly and our Executive Director Hugh Blemings. We focused on the upcoming OpenPOWER Summit and the innovations and collaboration that OpenPOWER members have developed. + +In case you weren’t able to attend and participate, here is a recap of the conversation. 
Don’t forget to [register for OpenPOWER Summit 2018 here](https://openpowerfoundation.org/summit-2018-03-us/)! + +## Where are you from and what are you doing with POWER? + +https://twitter.com/Hulk\_Sm444sh/status/969304245827665920 + +https://twitter.com/hughhalf/status/969304259605889024 + +https://twitter.com/adi\_gangidi/status/969312647375130624 + +https://twitter.com/farbenstau/status/969307919090159617 + +## What are the most exciting opportunities for POWER in 2018? + +https://twitter.com/hughhalf/status/969306324755296256 + +https://twitter.com/hughhalf/status/969306504376369152 + +https://twitter.com/Hulk\_Sm444sh/status/969306572978606080 + +https://twitter.com/adi\_gangidi/status/969318124477665282 + +## What are you most looking forward to at OpenPOWER Summit 2018? + +https://twitter.com/Hulk\_Sm444sh/status/969308422121377792 + +https://twitter.com/hughhalf/status/969310907644784640 + +https://twitter.com/adi\_gangidi/status/969316730001555456 + +## What would you like to see at upcoming OpenPOWER Summits in Europe & China? + +https://twitter.com/Hulk\_Sm444sh/status/969311694370627584 + +https://twitter.com/hughhalf/status/969312636432035841 + +https://twitter.com/hughhalf/status/969312975705161728 diff --git a/content/blog/openpowerchat-twitter-adi-gangidi.md b/content/blog/openpowerchat-twitter-adi-gangidi.md new file mode 100644 index 0000000..f361f82 --- /dev/null +++ b/content/blog/openpowerchat-twitter-adi-gangidi.md @@ -0,0 +1,24 @@ +--- +title: "Join #OpenPOWERChat on Twitter with Adi Gangidi" +date: "2018-07-09" +categories: + - "blogs" +tags: + - "featured" +--- + +Hi OpenPOWER Foundation members, + +We’re excited to announce our next Twitter chat on Thursday, July 12 at 4:00 p.m. ET. + +Here are some of details you should know: + +- The chat will be hosted right on our [@OpenPOWERorg](https://twitter.com/openpowerorg?lang=en)Twitter page +- Adi Gangidi, Senior System Design Engineer, Rackspace will be our special guest host +- You can join the conversation using #[OpenPOWERchat](https://twitter.com/search?f=tweets&q=%23openpowerchat&src=typd) + +The conversation will begin at 4 p.m. ET, so please drop in and answer as many questions as you can. + +Our chat will focus on the [Google and Rackspace collaboration on Zaius / Barreleye G2](https://openpowerfoundation.org/blogs/barreleye-g2-zaius-motherboard-openpower-summit/) that was showcased at the OpenPOWER Summit 2018. + +We look forward to chatting with you on Twitter on July 12! diff --git a/content/blog/openpowerchat-zaius-barreleye-g2.md b/content/blog/openpowerchat-zaius-barreleye-g2.md new file mode 100644 index 0000000..7c364d1 --- /dev/null +++ b/content/blog/openpowerchat-zaius-barreleye-g2.md @@ -0,0 +1,64 @@ +--- +title: "#OpenPOWERChat Provides an update on Zaius + Barreleye G2" +date: "2018-07-24" +categories: + - "blogs" +tags: + - "featured" +--- + +Adi Gangidi, lead Rackspace engineer on the Zaius / Barreleye G2 project joined the OpenPOWER Foundation for a Twitter Chat to discuss the project’s technology, features, performance and more. + +Here’s a recap of the conversation. + +## When did Google and Rackspace begin collaborating on this project? + +https://twitter.com/adi\_gangidi/status/1017499833492525056 + +## Why was POWER9 / OpenPOWER chosen for the Rackspace and Google collaboration? 
+ +https://twitter.com/adi\_gangidi/status/1017500640044609536 + +https://twitter.com/adi\_gangidi/status/1017500956127350789 + +https://twitter.com/adi\_gangidi/status/1017501658237100035 + +## What has the Zaius / Barreleye G2 development process involved? + +https://twitter.com/adi\_gangidi/status/1017502258211229696 + +https://twitter.com/adi\_gangidi/status/1017502513665363968 + +## What types of standard / new technology are included in the motherboard? + +https://twitter.com/adi\_gangidi/status/1017505769686740992 + +## Why should anyone consider Zaius / Barreleye G2 or OpenPOWER servers over other alternatives? + +https://twitter.com/adi\_gangidi/status/1017506221773946880 + +https://twitter.com/adi\_gangidi/status/1017506569225998336 + +## How is the OpenCAPI ecosystem coming along? How can customers on the OpenPOWER platform take advantage of OpenCAPI? + +https://twitter.com/adi\_gangidi/status/1017507096919724034 + +https://twitter.com/adi\_gangidi/status/1017507739814285312 + +https://twitter.com/adi\_gangidi/status/1017508149400670209 + +## How about the PCIe Gen4 ecosystem? How can consumers take advantage of it? + +https://twitter.com/adi\_gangidi/status/1017508561063161857 + +https://twitter.com/adi\_gangidi/status/1017508921844649984 + +## Why should someone consider this OpenPOWER platform for AI workloads? + +https://twitter.com/adi\_gangidi/status/1017509547030843392 + +https://twitter.com/adi\_gangidi/status/1017509669332496384 + +## How can the broader industry and other OpenPOWER members benefit from the work Rackspace is doing on this project? + +https://twitter.com/adi\_gangidi/status/1017510607585767424 diff --git a/content/blog/opftechnical-initiatives.md b/content/blog/opftechnical-initiatives.md new file mode 100644 index 0000000..8d0c358 --- /dev/null +++ b/content/blog/opftechnical-initiatives.md @@ -0,0 +1,26 @@ +--- +title: "OpenPOWER Foundation Technical Initiatives" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Abstract + +As the Chair of the OpenPOWER Technical Steering Committee Mr. Brown will be discussing the technical agenda of the OpenPOWER Foundation and the structure of foundation workgroups.  He will describe the scope and objectives of key workgroups as well as their relationships to each other.  A roadmap of workgroup activities will illustrate when the community can expect key results.  The presentation will also cover three of the key initiatives within the OpenPOWER Foundation.  These initiatives involve work recently started to enable active foundation member engagement in workgroups focused on application solution sets IO device enablement, and compliance.  Mr. Brown will be joined by Randall Ross of Canonical who will cover application solution sets,  Rakesh Sharma of IBM who will cover broader IO device enablement, and Sandy Woodword of IBM who is the chair of the compliance workgroup .  Please join us for this in depth look at the OpenPOWER Foundation's technical activities and how we will enable ecosystem members to deliver solutions. + +### Bios + +**Jeffrey D. Brown**, IBM Server and Technology Group.  Jeff is an IBM Distinguished Engineer and member of the IBM Academy of Technology.  He received a B.S. in Electrical Engineering and a B.S. in Physics from Washington State University in 1981.  He received his M.S. degree in Electrical Engineering from Washington State University in 1982.  
Jeff has over 25 years of experience in VLSI development including processor, memory, and IO subsystem development projects for IBM multi-processor systems and servers.  He is the coauthor of more than 40 patent filings.  He has been the Chief Engineer on several processor and SOC chip development programs including Waternoose for the XBOX360 and Power Edge of Network.  Jeff is currently actively involved in the OpenPOWER Foundation and chairs the Technical Steering Committee. + +**Sandra Woodward** is the OpenPOWER Foundation Compliance Work Group Chair. She received her B.S. in Electrical Engineering from University of Nebraska Lincoln and her M.S. degree in Electrical Engineering from Syracuse University.  Sandy has over 20 years of  experience with the POWER architecture.  She is a Senior Technical Staff Member at IBM, is an IBM Academy of Technology Member, and a member of the Society of Women Engineers and Women in Technology International. + +**Rakesh Sharma** is IBM POWER Systems I/O Chief Engineer and is focused on OpenPOWER I/O.  He chairs OpenPOWER I/O Workgroup Chartering Committee. He received his Bachelors in Electrical Engineering from IIT-Kanpur, India and Masters in Computer Science from North Dakota State University, Fargo. Rakesh has over 20 years of experience with the POWER architecture specializing in I/O, Virtualization and Networking. He is a Senior Technical Staff Member and is an IBM Master Inventor. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/JDBrown_OPFS2015_030615.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/oregon-state-power9-resources.md b/content/blog/oregon-state-power9-resources.md new file mode 100644 index 0000000..0e84873 --- /dev/null +++ b/content/blog/oregon-state-power9-resources.md @@ -0,0 +1,34 @@ +--- +title: "Oregon State University Provides Power9 GPU Resources" +date: "2018-10-19" +categories: + - "blogs" +tags: + - "featured" +--- + +By: Chris Sullivan, assistant director for biocomputing, Oregon State University Center for Genome Research and Biocomputing + +The Oregon State University Open Source Lab (OSUOSL) and Center for Genome Research and Biocomputing (CGRB) are excited to now provide access to POWER9 _AC922 Newell_ Systems (8335-GTG). + +The AC922 is the newest in the IBM set of AI-based servers used by many of the Oregon State research groups to overcome limits when processing large data sets. To ensure developers can take full advantage of these exciting new machines, we are allowing free access to several of these AC922 setups. We believe these new machines significantly change the way we can address limits in scope and remove bias in the work we currently do. The only limit we see is having access to all the great open source tools available on other platforms -  providing developers with access can help overcome that problem. + +The systems accessible to developers are set up with two processor sockets, offering 20-core (with 160 thread) at 3.0 GHz, four Tesla V100 with NVLink GPUs, 1TB of system memory, two 1.6TB CAPI-enabled NVMe SSD Controller and 40G network cards. These are the standard setups we look at for processing data as the high thread count on the CPU side allows us to process quickly along with the ability to do massive deep-learning and AI processing. 
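To make that configuration concrete, the short C++ sketch below (our own illustration, not part of the OSUOSL tooling) uses OpenMP and the CUDA runtime to report the hardware threads and GPUs a program actually sees on one of these nodes; the file name and build line are only one plausible way to compile it.

```cpp
// probe_node.cpp -- hypothetical helper for checking the node layout described above.
// Build (one possibility; CUDA include/library paths may be needed):
//   g++ -fopenmp probe_node.cpp -lcudart -o probe_node
#include <cstdio>
#include <omp.h>
#include <cuda_runtime.h>

int main() {
    // On the two-socket AC922 setups described above this typically reports
    // the 160 hardware threads visible to the operating system.
    std::printf("OpenMP threads available: %d\n", omp_get_max_threads());

    int ngpu = 0;
    if (cudaGetDeviceCount(&ngpu) != cudaSuccess) ngpu = 0;
    std::printf("CUDA devices visible: %d\n", ngpu);  // expect four Tesla V100s
    for (int i = 0; i < ngpu; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  GPU %d: %s, %.1f GB\n", i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```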
+ +## **Using GPUs to Classify Oceans of Data** + +For example, we currently take video from various locations in the ocean and process that data to identify all plankton to help [manage ocean health](https://developer.ibm.com/linuxonpower/2018/09/10/using-gpus-classify-oceans-data/). These AC922 machines are able to do all the video processing using FFMPEG with threading on the CPU side, generate images, and then send the data directly to the GPUs over NVLink to process the images with a Convolutional Neural Network (CNN) that identifies the plankton. + +This is only one example of how we can treat this machine as a cluster in a box and do all the work, starting with video files and ending with CSV output with counts. We have found that the higher the thread count, the better the return when using the Power9 (as well as the Power8) processors. + +Below is a list of processors we have available to test and some quick numbers showing the benefits of threading on these machines. + +
| Benchmark | EPYC 7601 32-core/64-thread @ 1200 MHz (seconds) | EPYC (s * MHz) | Xeon E5-2620 8-core/16-thread @ 3400 MHz (seconds) | Xeon (s * MHz) | POWER9 20-core/40-thread @ 2016 MHz (seconds) | POWER9 (s * MHz) |
| --- | --- | --- | --- | --- | --- | --- |
| Fibonacci | 76.4435 | 91732.2000 | 53.8354 | 183040.3600 | 47.7507 | 96265.4112 |
| Pi | 154.2242 | 185069.0400 | 105.5235 | 358779.9000 | 129.1436 | 260353.4976 |
| Float math | 41.2044 | 49445.2800 | 34.5253 | 117386.0200 | 47.7137 | 96190.8192 |
| Factorize 1 process | 69.0709 | 82885.0800 | 58.8655 | 200142.7000 | 71.8679 | 144885.6864 |
| Factorize 2 process | 71.9220 | 86306.4000 | 48.7508 | 165752.7200 | 52.2643 | 105364.8288 |
| Factorize 8 process | 22.2354 | 26682.4800 | 18.2673 | 62108.8200 | 15.2357 | 30715.1712 |
| Factorize 16 process | 16.4457 | 19734.8400 | 15.1000 | 51340.0000 | 11.3186 | 22818.2976 |
| Factorize 32 process | 23.9592 | 28751.0400 | 23.7475 | 80741.5000 | 11.9565 | 24104.3040 |
| Factorize 36 process | 24.2955 | 29154.6000 | 25.7965 | 87708.1000 | 11.6990 | 23585.1840 |
+ +**Table 1:** Processing time for different calculations showing the lower times for Power9 machines. The big return on this hardware is the threading and this table shows over 2 times faster times on Power9 as we increase threads. Many groups have achieved an order of 4 times greater return when running against the most current x86-based machines.   + +The CGRB is focused on working with processor companies that are changing the threading on CPUs and bringing GPUs into play, like IBM and the new AC922. Right now for workloads that take months to complete on x86 boxes we are working with developers to move tools to Power9 so we can take advantage of these returns. Because the value around these machines is centered on threading and AI, we invite developers to come and get free access to a few Power9 and other Power8 machines to port tools and optimize performance. + +To get access, simply sign up for an account at the link below and we will get back to you.**OSUOSL GPU Access:** [https://osuosl.org/services/powerdev/request\_gpu/](https://osuosl.org/services/powerdev/request_gpu/)** + +AC922 Hardware:** [https://www.ibm.com/us-en/marketplace/power-systems-ac922](https://www.ibm.com/us-en/marketplace/power-systems-ac922) diff --git a/content/blog/parallelware-technology-eases-hpc-software-development-for-power-systems-featuring-openpower-member-appentra.md b/content/blog/parallelware-technology-eases-hpc-software-development-for-power-systems-featuring-openpower-member-appentra.md new file mode 100644 index 0000000..2d68d47 --- /dev/null +++ b/content/blog/parallelware-technology-eases-hpc-software-development-for-power-systems-featuring-openpower-member-appentra.md @@ -0,0 +1,36 @@ +--- +title: "How Parallelware Technology Eases HPC Software Development for POWER Systems" +date: "2019-01-22" +categories: + - "blogs" +tags: + - "featured" +--- + +_Featuring OpenPOWER member: [Appentra](https://www.appentra.com/)_ + + By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM + +The 3rd OpenPOWER Academic Discussion Group Workshop was a great meeting of more than 40 developers, researchers and partners all working on Power. I’ve already summarized two sessions led by speakers from Oak Ridge National Laboratory – [Early Application Experiences on Summit](https://openpowerfoundation.org/blogs/early-application-experiences-summit-oak-ridge/) and [Targeting GPUs using OpenMP Directives on Summit](https://openpowerfoundation.org/blogs/targeting-gpus-using-openmp-directives/). + +[Manuel Arenaz](https://www.linkedin.com/in/manuelarenaz/), CEO and co-founder of [Appentra](https://www.appentra.com/), led a session designed to answer an important question: is there a need for parallelware tools on POWER systems? According to Arenaz, there is of course incredible computational power in even a single node of a Power-based supercomputer like [Summit](https://www.olcf.ornl.gov/summit/). But there are also a number of parallel programming challenges: + +- Parallel programming of many-core processors +- Parallel programming of multiple GPUs +- Data movement through a heterogeneous complex memory hierarchy +- Training of computational researchers and engineers +- Porting of existing codes to pre-exascale systems + +Appentra’s efforts to make code parallel and help developers make the most of high performance computing resources can help solve these challenges. 
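To ground that, the fragment below is a hypothetical example of the kind of hot loop these tools are aimed at: a serial C++ dot product plus the OpenACC directive a developer might add, by hand or with guidance from a tool such as Parallelware Trainer, to offload it. The code is our own sketch, not Appentra's.

```cpp
#include <cstdio>
#include <vector>

// A simple serial hot spot: dot product over two large arrays.
double dot(const double* a, const double* b, int n) {
    double sum = 0.0;
    // The annotation below is the kind of change a developer makes (or a
    // parallelization assistant suggests); compilers without OpenACC
    // support simply ignore it and the loop stays serial.
    #pragma acc parallel loop reduction(+:sum) copyin(a[0:n], b[0:n])
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}

int main() {
    std::vector<double> a(1 << 20, 1.0), b(1 << 20, 2.0);
    std::printf("dot = %f\n", dot(a.data(), b.data(), static_cast<int>(a.size())));
    return 0;
}
```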
[Parallelware Trainer](https://www.appentra.com/products/parallelware-trainer/) is an interactive tool that acts as a virtual mentor to provide faster, more effective learning, and [Parallelware Analyzer](https://www.appentra.com/products/parallelware-analyzer/) (still in beta) is a command-line reporting tool that improves the productivity of HPC application developers. + +Appentra plans to certify both Parallelware tools as [OpenPOWER Ready](https://openpowerfoundation.org/technical/openpower-ready/) in 2019. + +For more detail on the product roadmap of Parallelware Trainer and Parallelware Analyzer, view Arenaz’ full session video and slides below. + +https://www.youtube.com/watch?v=6unHYjQruEg + +  + + + +**[How Parallelware technology eases HPC software development for POWER systems](//www.slideshare.net/ganesannarayanasamy/how-parallelware-technology-eases-hpc-software-development-for-power-systems "How Parallelware technology eases HPC software development for POWER systems")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)** diff --git a/content/blog/performance-evaluation-methodology-to-the-openpower-user-community-to-evaluate-the-performance-using-the-advanced-instrumentation-capabilities-available-in-the-power-8-microprocessor.md b/content/blog/performance-evaluation-methodology-to-the-openpower-user-community-to-evaluate-the-performance-using-the-advanced-instrumentation-capabilities-available-in-the-power-8-microprocessor.md new file mode 100644 index 0000000..17f1e83 --- /dev/null +++ b/content/blog/performance-evaluation-methodology-to-the-openpower-user-community-to-evaluate-the-performance-using-the-advanced-instrumentation-capabilities-available-in-the-power-8-microprocessor.md @@ -0,0 +1,32 @@ +--- +title: "Performance evaluation methodology to the OpenPOWER user community to evaluate the performance using the advanced instrumentation capabilities available in the Power 8 Microprocessor" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Speaker’s Bio + +Satish Kumar Sadasivam is a Senior Performance Engineer and a Master Inventor at IBM STG responsible for Compiler and Hardware Performance analysis and optimization of IBM Power Processors and Compilers. He has 9+ years of experience in the area of Computer Architecture covering a wide range of domains including Performance Analysis, Compiler Optimization, HPC, Competitive Analysis and Processor Validation. Currently he is responsible for delivering performance leadership for the Power 8 processor for emerging workloads. He also evaluates competitors' (Intel) microarchitecture designs in detail and provides feedback to Power 9 hardware design to address next-generation computing needs. He has filed more than 15 patents, achieved his 5th Invention Plateau, and has several publications to his credit. + +### Organization + +IBM Systems and Technology Group + +### Presentation Objective + +The primary objective of this presentation is to provide a performance evaluation methodology to the OpenPOWER user community to evaluate performance using the advanced instrumentation capabilities available in the Power 8 Microprocessor. It also presents a case study on how the CPI stack cycle accounting model can be effectively used to evaluate the performance of SPEC 2006 workloads in various SMT modes. + +### Abstract + +This presentation has been split into two sections. 
In the first section of the presentation we will primarily cover the key performance instrumentation capabilities of the Power 8 microprocessor and how they can be effectively utilized to understand and resolve performance bottlenecks in the code. This will cover in detail the CPI stack cycle accounting model of the Power 8 microprocessor and how it differs from the previous Power 7 processor architecture. It will also cover the improvements that went into the POWER 8 CPI stack cycle accounting, which make the cycle accounting very precise. + +In the second section of the presentation we will cover the single-core SMT performance analysis of the SPEC 2006 workloads on the POWER 8 microprocessor. We will also discuss a performance evaluation methodology used to evaluate the performance of SMT. We will describe in detail how building the CPI stack for various SMT levels helps us root-cause the key performance bottlenecks in the code at higher SMT levels and how these can be attributed to the different units of the microprocessor. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Sadasivam-Satish_OPFS2015_IBM_031615_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/pgi-compilers-for-openpower-platforms-which-will-enable-seamless-migration-of-multi-core-and-gpu-enabled-hpc-applications-from-linuxx86-to-openpower.md b/content/blog/pgi-compilers-for-openpower-platforms-which-will-enable-seamless-migration-of-multi-core-and-gpu-enabled-hpc-applications-from-linuxx86-to-openpower.md new file mode 100644 index 0000000..847505d --- /dev/null +++ b/content/blog/pgi-compilers-for-openpower-platforms-which-will-enable-seamless-migration-of-multi-core-and-gpu-enabled-hpc-applications-from-linuxx86-to-openpower.md @@ -0,0 +1,26 @@ +--- +title: "PGI compilers for OpenPOWER platforms, which will enable seamless migration of multi-core and GPU-enabled HPC applications from Linux/x86 to OpenPOWER" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Presentation objective + +PGI Fortran, C and C++ compilers & tools are used on Linux/x86 processor-based systems at over 5000 high-performance computing (HPC) sites around the world.  They are distinguished by HPC-focused optimizations including automatic SIMD vectorization, and extensive support for parallel and accelerator programming models and languages including OpenMP, OpenACC and CUDA.  The objective of this talk is to give an overview of the forthcoming PGI compilers for OpenPOWER platforms, which will enable seamless migration of multi-core and GPU-enabled HPC applications from Linux/x86 to OpenPOWER and performance portability of HPC applications. + +### **Abstract** + +High-performance computing (HPC) systems are now built around a de facto node architecture with high-speed latency-optimized SIMD-capable CPUs coupled to massively parallel bandwidth-optimized Accelerators.  In recent years, as many as 90% of the Top 500 Computing systems relied entirely on x86 CPU-based systems.   OpenPOWER and the increasing success of Accelerator-based systems offer an alternative that promises unrivalled multi-core CPU performance and closer coupling of CPUs and GPUs through technologies like NVIDIA’s NVLink high-speed interconnect.  
PGIFortran/C/C++ compilers, until now available exclusively on x86 CPU-based systems, are distinguished by a focus on HPC features and optimizations such as automatic SIMD vectorization and support for high-level parallel and GPU programming. This talk will give an overview of the forthcoming PGI compilers for POWER+GPU processor-based systems, including features for seamless migration of applications from Linux/x86 to Linux/POWER and performance portability across all mainstream HPC systems. + +### Speaker + +Doug Miles, director, PGI Compilers & Tools, since 2003.  Prior to joining PGI in 1993, Doug was an applications engineer at Cray Research Superservers and Floating Point Systems. He can be reached by e-mail at [douglas.miles@pgroup.com](mailto:douglas.miles@pgroup.com). + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Miles_OPFS2015_031815.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/pgi-compilers-gpu-enabled-hpc-nvlink.md b/content/blog/pgi-compilers-gpu-enabled-hpc-nvlink.md new file mode 100644 index 0000000..80e5af9 --- /dev/null +++ b/content/blog/pgi-compilers-gpu-enabled-hpc-nvlink.md @@ -0,0 +1,118 @@ +--- +title: "New PGI compilers enable seamless migration of GPU-enabled HPC applications from Linux/x86 to NVLink-enabled OpenPOWER+Tesla" +date: "2016-11-15" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Doug Miles, director of PGI compilers & tools, NVIDIA Corporation_ + +NVIDIA introduced the first production release of the PGI Fortran, C and C++ compilers with OpenACC targeting Linux/OpenPOWER and Tesla computing systems, including IBM’s OpenPOWER LC servers that combine POWER8 CPUs with NVIDIA NVLink interconnect technology and NVIDIA Tesla GPU accelerators. + +## **Simplifying Migration from Linux/x86 to Linux/OpenPOWER Processor-based Servers** + +PGI for OpenPOWER enables easy porting of PGI-compiled HPC applications from Linux/x86 to Linux/OpenPOWER, often through a simple re-compile, including support for OpenMP 3.1, OpenACC and CUDA Fortran parallel programming. A good example is the WRF weather research and forecasting model, which together with its various support packages is comprised of over 800,000 lines of mostly Fortran source code. The OpenMP version of WRF 3.8.1 can be compiled on either Linux/OpenPOWER or Linux/x86 using the new PGI 16.10 compilers with identical makefiles, compiler options, source code and open source support packages: + +![pgi1](images/pgi1.png) + +## **Use at Oak Ridge National Laboratory** + +The PGI compiler suite for OpenPOWER is among the available tools Oak Ridge National Laboratory will use to build and run large HPC applications on x86 CPUs, OpenPOWER CPUs and NVIDIA GPUs using the same source code base. + +“Porting HPC applications from one platform to another is a significant and challenging effort in the adoption of new hardware technologies,” said Tjerk Straatsma, Scientific Computing Group Leader at Oak Ridge National Laboratory. “Architectural and performance portability like this is critical to our application developers and users as we move from existing CPU-only and GPU-enabled applications on machines like Titan to DOE’s upcoming major systems including the Summit system we’re installing at ORNL.” The upcoming CORAL Summit system at ORNL will be based on POWER9 CPUs and NVIDIA Volta GPUs. 
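In practice, that portability comes down to the same source and the same compiler options working on both architectures. As a minimal sketch of our own (not taken from the WRF build), the OpenMP loop below compiles unchanged with the usual PGI flags, for example `pgc++ -fast -mp`, on Linux/x86 and Linux/OpenPOWER; only the machine underneath changes.

```cpp
// saxpy_omp.cpp -- hypothetical portability example; the same command line,
// e.g. pgc++ -fast -mp saxpy_omp.cpp -o saxpy, is used on x86 and OpenPOWER.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 24;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;

    // Standard OpenMP worksharing; nothing here is architecture-specific.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    std::printf("y[0] = %f\n", y[0]);
    return 0;
}
```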
+ +## **OpenACC: The Easy On-ramp to GPU Computing** + +In addition to ease of porting between Linux/x86 and Linux/OpenPOWER platforms, the new PGI compilers support OpenACC directive-based GPU programming in Fortran, C and C++ for an easy on-ramp to GPU-computing with NVIDIA Tesla accelerators. As an example, consider this code fragment from the OpenACC version of the [CloverLeaf](https://github.com/UK-MAC/CloverLeaf_OpenACC/tree/6e641da68033cbbb6ca099efc0afd8b7520b601b) mini-app, originally developed by AWE in the UK: + +66 !$ACC DATA & + 67 !$ACC PRESENT(density0,energy0,pressure,viscosity,volume,xarea) & + 68 !$ACC PRESENT(xvel0,yarea,yvel0) & + 69 !$ACC PRESENT(density1,energy1) & + 70 !$ACC PRESENT(xvel1,yvel1) & + 71 !$ACC PRESENT(volume\_change) + 72 + 73   IF(predict)THEN + 74 + 75 !$ACC KERNELS + 76     DO k=y\_min,y\_max + 77       DO j=x\_min,x\_max + 78 + 79         left\_flux=  (xarea(j  ,k  )\*(xvel0(j  ,k  )+xvel0(j  ,k+1) & + 80                                     +xvel0(j  ,k  )+xvel0(j  ,k+1)))\*0.25\_8\*dt\*0.5 + 81         right\_flux= (xarea(j+1,k  )\*(xvel0(j+1,k  )+xvel0(j+1,k+1) & + 82                                     +xvel0(j+1,k  )+xvel0(j+1,k+1)))\*0.25\_8\*dt\*0.5 + 83         bottom\_flux=(yarea(j  ,k  )\*(yvel0(j  ,k  )+yvel0(j+1,k  ) & + 84                                     +yvel0(j  ,k  )+yvel0(j+1,k  )))\*0.25\_8\*dt\*0.5 + 85         top\_flux=   (yarea(j  ,k+1)\*(yvel0(j  ,k+1)+yvel0(j+1,k+1) & + 86                                     +yvel0(j  ,k+1)+yvel0(j+1,k+1)))\*0.25\_8\*dt\*0.5 + 87         total\_flux=right\_flux-left\_flux+top\_flux-bottom\_flux + 88 + 89         volume\_change(j,k)=volume(j,k)/(volume(j,k)+total\_flux) + 90 + 91         min\_cell\_volume=MIN(volume(j,k)+right\_flux-left\_flux+top\_flux-bottom\_flux & + 92                            ,volume(j,k)+right\_flux-left\_flux                      & + 93                            ,volume(j,k)+top\_flux-bottom\_flux) + 94 + 95         recip\_volume=1.0/volume(j,k) + 96 + 97         energy\_change=(pressure(j,k)/density0(j,k)+viscosity(j,k)/density0(j,k))\*   +            total\_flux\*recip\_volume + 98 + 99         energy1(j,k)=energy0(j,k)-energy\_chang + 10 + 101         density1(j,k)=density0(j,k)\*volume\_change(j,k + 10 + 103       ENDD + 104     ENDD + 105 !$ACC END KERNELS + 106 ... + +Compiling the code above targeting a Tesla GPU on a Linux/OpenPOWER IBM Minsky system with the PGI OpenACC Fortran compiler yields the following output from the compiler: + +% pgfortran -fast -ta=tesla -Minfo -c PdV\_kernel.f90 + pdv\_kernel: +      66, Generating present(density1(:,:),energy1(:,:), +          pressure(:,:),viscosity(:,:),volume\_change(:,:), +          xarea(:,:),xvel1(:,:),yarea(:,:),density0(:,:), +          energy0(:,:),xvel0(:,:),yvel1(:,:),yvel0(:,:),volume(:,:)) +      76, Loop is parallelizable +      77, Loop is parallelizable +          Accelerator kernel generated +          Generating Tesla code +          76, !$acc loop gang, vector(4) ! blockidx%y threadidx%y +          77, !$acc loop gang, vector(32) ! blockidx%x threadidx%x +       ... + +The compiler scans the code between the OpenACC KERNELS and END KERNELS directives, determines the loops are parallelizable, and parallelizes the code for execution on a Tesla GPU. 
+ +The same code can be compiled for serial execution on any platform by any standard Fortran compiler, or with the PGI compiler on the IBM system the OpenACC directives can be processed to generate parallel code targeting the multicore OpenPOWER CPUs: + +% pgfortran -fast -ta=multicore -Minfo -c PdV\_kernel.f90 + pdv\_kernel: +      76, Loop is parallelizable +          Generating Multicore code +          76, !$acc loop gang +      77, Loop is parallelizable +          3 loop-carried redundant expressions removed with +               9 operations and 9 arrays +          Generated vector simd code for the loop +      ... + +Cloverleaf compiled for OpenACC parallel execution across all 20 OpenPOWER CPU cores of an IBM Minsky server runs in about **17 seconds**. The identical source code compiled for execution on one Tesla Pascal P100 GPU in the same system runs in about **4 seconds**, providing a **4x speed-up** over multicore CPU execution. + +## **NVLink: Tearing Down the Memory Wall Between CPUs and GPUs** + +In addition to ease of porting between Linux/x86 and Linux/OpenPOWER platforms, the new PGI compilers enable interoperability of OpenACC and NVIDIA’s CUDA 8.0 Unified Memory features for Pascal GPUs. Specifying the -ta=tesla:managed option to the PGI OpenACC compilers enables this feature, in which most types of allocatable data are placed in CUDA Unified Memory. Movement of these variables and data structures between CPU main memory and GPU device memory is then managed by the CUDA memory manager on a page-by-page basis, rather than by the programmer using OpenACC directives or the compiler runtime system. + +Programs developed in this mode can decrease initial development time substantially, as shown in a recent joint [webinar presented by NVIDIA and IBM](http://on-demand.gputechconf.com/gtc/2016/webinar/ibm-power-minsky-nvlink-webinar.mp4). The chart below shows the performance of the SPEC ACCEL 1.0 OpenACC benchmarks running on one Pascal-based Tesla P100 GPU when compiled using CUDA Unified Memory relative to the performance with user-directed and optimized data movement. On a Minsky system with NVLink between the POWER8 CPUs and Tesla P100 GPUs, the versions of the 15 SPEC ACCEL benchmarks compiled to use CUDA Unified Memory averages within 10% of the versions with user-directed data movement of all allocatable data: + +![pgi2](images/pgi2-1024x535.png) + +Three of the benchmarks (354.cg, 357.csp and 370.bt) use only static data, so the CUDA Unified Memory feature does not apply. The other 12 benchmarks all make substantial use of allocatable data. + +"Easier programming methodologies like OpenMP and OpenACC are critical for the widespread adoption of GPU-accelerated systems," said Sumit Gupta, Vice President of High Performance Computing & Data Analytics, IBM. "The new PGI compilers take advantage of the high-speed NVIDIA NVLink connection between the POWER8 CPU and the NVIDIA Tesla P100 GPU accelerators, along with the page migration engine, to make it much easier to accelerate and enhance performance of high performance computing and data analytics workloads.” + +PGI is demonstrating the PGI Accelerator compilers for OpenPOWER in booth 2131 at SC16 in Salt Lake City, Nov. 14–17. Additional information is available and the new PGI compilers are downloadable at [www.pgroup.com/openpower](http://www.pgroup.com/openpower). 
diff --git a/content/blog/pgi-openpowertesla-compilers-demoing-at-isc15.md b/content/blog/pgi-openpowertesla-compilers-demoing-at-isc15.md new file mode 100644 index 0000000..fc1d1f0 --- /dev/null +++ b/content/blog/pgi-openpowertesla-compilers-demoing-at-isc15.md @@ -0,0 +1,140 @@ +--- +title: "PGI OpenPOWER+Tesla Compilers Demo'ing at ISC'15" +date: "2015-07-13" +categories: + - "blogs" +tags: + - "featured" +--- + +By Patrick Brooks, PGI Product Marketing Manager + +Last November at Supercomputing 2014, [we announced](http://nvidianews.nvidia.com/news/pgi-high-performance-computing-compilers-coming-to-ibm-power-systems) that the PGI compilers for high-performance computing were coming to the OpenPOWER platform. These compilers will be used on the [U.S. Department of Energy CORAL systems being built by IBM](http://www-03.ibm.com/press/us/en/pressrelease/45387.wss), and will be generally available on OpenPOWER systems in 2016. PGI compilers on OpenPOWER offer a user interface, language features, programming models and optimizations identical to PGI on Linux/x86. Any HPC application you are currently running on x86+Tesla using PGI compilers will re-compile and run with few or no modifications on OpenPOWER+Tesla, making your applications portable to any HPC systems in the data center based on OpenPOWER or x86 CPUs, with or without attached NVIDIA GPU compute accelerators. [PGI presented on this in detail](https://openpowerfoundation.org/blogs/pgi-compilers-for-openpower-platforms-which-will-enable-seamless-migration-of-multi-core-and-gpu-enabled-hpc-applications-from-linuxx86-to-openpower/) at the OpenPOWER foundation summit in March. + +At ISC'15 in Frankfurt, Germany July 14-17, you can get a first peak at the PGI compilers for OpenPOWER at the PGI stand (#1051) in the ISC exhibition hall. An early version of the compilers is already in use at Oak Ridge National Laboratory (ORNL), one of the two DOE sites where the IBM-developed CORAL supercomputers will be installed. For the ISC demo, the PGI Accelerator C/C++ compilers are being shown running on a remote [IBM S824L OpenPOWER Linux server](http://www-03.ibm.com/systems/power/hardware/s824l/index.html) with an attached [NVIDIA Tesla K40 GPU](http://www.nvidia.com/object/tesla-servers.html). These are pre-production PGI compilers, but all GCC 4.9 compatibility features, [OpenACC 2.0](https://developer.nvidia.com/openacc) features and interoperability with CUDA Unified Memory are enabled. The system is running [Ubuntu Linux](https://openpowerfoundation.org/blogs/how-ubuntu-is-enabling-openpower-and-innovation-randall-ross-canonical/) and NVIDIA CUDA 7.0. + +These compilers are being developed for future generation, closely coupled IBM OpenPOWER CPUs and NVIDIA GPUs. To demonstrate the capabilities they already have, PGI is showing how its pgc++ compiler for OpenPOWER can build an OpenACC version of the well-known Lulesh Hydrodynamics Proxy application (mini-app). [Lulesh](https://codesign.llnl.gov/lulesh.php) was developed at the Lawrence Livermore National Laboratory (LLNL), which is the other site where the IBM-developed CORAL supercomputers will be installed. + +Like most mini-apps, Lulesh is a relatively small code of only a few thousand lines, so it can be built and executed pretty quickly. Within those few thousand lines of code, 45 OpenACC pragmas are sprinkled in to enable parallel execution. 
Any C++ compiler that doesn’t implement OpenACC extensions ignores the pragmas, but with an OpenACC-enbled compiler like the one from PGI, they enable parallelization and offloading of compute intensive loops for execution on the NVIDIA K40 GPU. Here's what a section of the Lulush code with OpenACC pragmas looks like: + +``` + +3267 Real_t *pHalfStep = Allocate(length) ; +3268 +3269 #pragma acc parallel loop +3270 for (Index_t i = 0 ; i < length ; ++i) { +3271 e_new[i] = e_old[i] - Real_t(0.5) * delvc[i] * (p_old[i] + q_old[i]) +3272 + Real_t(0.5) * work[i]; +3273 +3274 if (e_new[i] < emin ) { +3275 e_new[i] = emin ; +3276 } +3277 } +3278 +3279 CalcPressureForElems(pHalfStep, bvc, pbvc, e_new, compHalfStep, vnewc, +3280 pmin, p_cut, eosvmax, length, regElemList); +3281 +3282 #pragma acc parallel loop +3283 for (Index_t i = 0 ; i < length ; ++i) { +3284 Real_t vhalf = Real_t(1.) / (Real_t(1.) + compHalfStep[i]) ; +3285 +3286 if ( delvc[i] > Real_t(0.) ) { +3287 q_new[i] /* = qq_old[i] = ql_old[i] */ = Real_t(0.) ; +3288 } +3289 else { +3290 Real_t ssc = ( pbvc[i] * e_new[i] +3291 + vhalf * vhalf * bvc[i] * pHalfStep[i] ) / rho0 ; +3292 +3293 if ( ssc <= Real_t(.1111111e-36) ) { +3294 ssc = Real_t(.3333333e-18) ; +3295 } else { +3296 ssc = SQRT(ssc) ; +3297 } +3298 +3299 q_new[i] = (ssc*ql_old[i] + qq_old[i]) ; +3300 } +3301 +3302 e_new[i] = e_new[i] + Real_t(0.5) * delvc[i] +3303 * ( Real_t(3.0)*(p_old[i] + q_old[i]) +3304 - Real_t(4.0)*(pHalfStep[i] + q_new[i])) ; +3305 } +3306 +``` + +The PGI compilers have a nice feature where they report back to the user whether and how loops are parallelized, and give advice on how source code might be modified to enable more or better parallelization or optimization. When the above loops are compiled, the corresponding messages emitted by the compiler look as follows: + +``` + +3267, Accelerator kernel generated +3270, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */ +3267, Generating copyout(e_new[:length]) +``` + +Generating copyin(delvc\[:length\],p\_old\[:length\],q\_old\[:length\],e\_old\[:length\],work\[:length\]) + +Generating Tesla code + +``` + +3279, Accelerator kernel generated +3283, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */ +3279, Generating copyin(p_old[:length],q_old[:length],pHalfStep[:length],bvc[:length]) +``` + +Generating copy(e\_new\[:length\]) + +Generating copyin(pbvc\[:length\],qq\_old\[:length\],ql\_old\[:length\],delvc\[:length\],compHalfStep\[:length\]) + +Generating copy(q\_new\[:length\]) + +Generating Tesla code + +The compiler generates an executable that includes both OpenPOWER CPU code and GPU-optimized code for any loops marked with OpenACC pragmas. It is a single executable usable on any Linux/OpenPOWER system, but which will offload loops for acceleration on any such system that incorporates NVIDIA GPUs. You can see in the messages that the PGI compiler is generating copyin/copyout calls to a runtime library that moves data back and forth between CPU memory and GPU memory. However, in this case the code is compiled to take advantage of CUDA Unified Memory, and when the executable is run those calls are ignored and the system automatically moves data back and forth. When the lulesh executable is run on the IBM S824L system, the output looks as follows: + +tuleta1% make run150 + +./lulesh2.0 -s 150 -i 100 + +Running problem size 150^3 per domain until completion + +Num processors: 1 + +Total number of elements: 3375000 To run other sizes, use -s . 
+ +To run a fixed number of iterations, use -i . + +To run a more or less balanced region set, use -b . + +To change the relative costs of regions, use -c . + +To print out progress, use -p + +To write an output file for VisIt, use -v + +See help (-h) for more options Run completed: + +Problem size = 150 + +MPI tasks = 1 + +Iteration count = 100 + +Final Origin Energy = 1.653340e+08 + +Testing Plane 0 of Energy Array on rank 0: + +MaxAbsDiff = 1.583248e-08 + +TotalAbsDiff = 7.488936e-08 + +MaxRelDiff = 8.368586e-14 + +... + +If you're at ISC this week, stop by to see the demo live and give us your feedback. We're working to add full support for Fortran and all of our remaining programming model features and optimizations, and plan to show another demo of these compilers at the conference this coming November in Austin, Texas. Soon thereafter in 2016, more and more HPC programmers will be able to port their existing PGI-compiled x86 and x86+Tesla applications to OpenPOWER+Tesla systems quickly and easily, with all the same PGI features and user interface across both platforms. + +We'll keep you posted on our progress. + +## About Pat Brooks + +Patrick Brooks has been a Product Marketing Manager at PGI since 2004. Previously, he held several positions in technology marketing including product and marketing management at Intel and Micron, account management at Regis McKenna and independent consultant. diff --git a/content/blog/physical-science-wg.md b/content/blog/physical-science-wg.md new file mode 100644 index 0000000..d573752 --- /dev/null +++ b/content/blog/physical-science-wg.md @@ -0,0 +1,34 @@ +--- +title: "New Physical Science Work Group Addresses Physics, Chemistry, and more with OpenPOWER" +date: "2016-10-26" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Andrea Bulgarelli, Chair, OpenPOWER Physical Science Work Group_ + +As the application of OpenPOWER technology expands, so too must the OpenPOWER Foundation continue to explore workloads demanded by the market that best leverage our technology. In pursuit of that, the OpenPOWER Foundation is pleased to announce the formation of the new Physical Science Work Group. + +The Physical Science Work Group is a persistent work group focused on establishing an OpenPOWER Foundation interface between their members and the Physical Science community. This Work Group aims at addressing the challenges of Physical Science projects by developing use cases, identifying requirements and extracting workflows. + +## Applying OpenPOWER to Physical Sciences + +We made the decision to create this work group to understand how the OpenPOWER ecosystem can help physical science projects. Today the scientific community (from Big Science projects to a single laboratory) is facing an enormous increase in data volume, rate and dimensionality from experiments, and computational science. + +There are two main projects that will be addressed by the work group: + +(1) Current and future Physical Science projects use cases, requirements, common workflows and reference solutions. Based on these requirements, identification of common workflows and possible reference solutions in collaboration with other OpenPOWER Foundation Workroups. + +(2) Scientific software frameworks and libraries. Identification of widely used software frameworks and libraries used in the Physical Science, the status of the porting to OpenPOWER solutions. 
+ +Another important point is to focus hardware/software developer around physical science projects requirements that are not covered by current solutions. + +## An Open Approach + +Working around use cases, the WG allows the OpenPOWER Foundation to be a forum between scientists and technical solutions developers, and also between scientists of different fields and projects to share experience and solutions with each other. The participation to this work group is open also for non-OpenPOWER member, to help to open the discussion within the Physical Science community around the OpenPOWER technology. For the same reason contributions and feedback are not subject to any requirement of confidentiality. The deliverables and their reviews are public. This will help to collect feedback from all interested people, not only OpenPOWER Foundation members. + +## Learn more at OpenPOWER Summit Europe + +This Friday, at the OpenPOWER Foundation Summit Europe, I will explain what the Physical Science Work Group is, why it was important for the Foundation to start it, and what are some of the workloads/problems the work group will work to address. To learn more about the new work group and others that are exploring the potential and use of OpenPOWER technology, please visit [https://openpowerfoundation.org/technical/working-groups/](https://openpowerfoundation.org/technical/working-groups/). diff --git a/content/blog/pmc-joins-the-openpower-foundation-and-brings-expertise-on-strategic-io-projects.md b/content/blog/pmc-joins-the-openpower-foundation-and-brings-expertise-on-strategic-io-projects.md new file mode 100644 index 0000000..212b664 --- /dev/null +++ b/content/blog/pmc-joins-the-openpower-foundation-and-brings-expertise-on-strategic-io-projects.md @@ -0,0 +1,35 @@ +--- +title: "PMC Joins the OpenPOWER Foundation and Brings Expertise on Strategic I/O Projects" +date: "2015-03-10" +categories: + - "press-releases" + - "blogs" +--- + +PMC Collaborates with Industry Leaders on OpenPOWER Advanced Server, Networking, Storage and Acceleration Technology + +SUNNYVALE, Calif.--(BUSINESS WIRE)--Dec. 17, 2014-- PMC-Sierra, Inc. (PMC®) (Nasdaq:PMCS), the semiconductor and software solutions innovator transforming networks that connect, move and store big data, today announced the company has joined the [OpenPOWER Foundation](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fopenpowerfoundation.org%2F&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=OpenPOWER+Foundation&index=1&md5=fdd90409f48edbad1e5e4c83b21662ce), an open development community based on the POWER microprocessor architecture. PMC will work with IBM and other OpenPOWER Foundation members to develop server and storage solutions for next-generation data centers that integrate IBM POWER CPUs and PMC Serial Attached SCSI (SAS) and NVM Express™ (NVMe) products. + +PMC joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology, as well as industry-leading open source software aimed at delivering more choice, control and flexibility to developers of next-generation hyperscale and cloud data centers. The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform. + +PMC intends to sponsor a new I/O workgroup, along with IBM, Emulex, Qlogic and Mellanox. 
The company also joins the system software, hardware architecture, coherent accelerator architecture (CIAA) and open server development platform workgroups. + +“Participating in the I/O workgroup with IBM and integrating our products into the OpenPOWER platform ensures that our customers have access to the latest cloud and big data storage technology,” said Kurt Chan, vice president of storage technology and strategy for PMC’s Enterprise Storage Division. “As a market leader in SAS and NVMe controllers, working with the industry to define new I/O interfaces and being at the forefront of new developments enables PMC to deliver the most advanced products for open architectures.” + +“The development model of the OpenPOWER Foundation is based on collaboration and represents a new way of innovating around processor technology,” said Brad McCredie, OpenPOWER president and IBM fellow and vice president. “OpenPOWER Foundation members like PMC will be able to add their own innovations on top of the POWER processor technology to better serve their customers’ needs, as well as create new products to address new markets. PMC’s deep I/O expertise will benefit our collective efforts and further strengthen OpenPOWER’s growing ecosystem.” + +To learn more about OpenPOWER and to view the complete list of current members, go to [www.openpowerfoundation.org](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.openpowerfoundation.org&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=www.openpowerfoundation.org&index=2&md5=2b1c2dbdf12676d9993abeb9408da5d6). #OpenPOWER to join the conversation. + +About PMC’s Server, Storage System and Flash Solutions + +PMC is a leading provider of enterprise storage system solutions for networked and server storage applications, with a broad portfolio of Adaptec by PMC® RAID adapters and HBAs, Tachyon® SAS/SATA and Fibre Channel (FC) protocol controllers, RAID controllers, Flashtec™ PCIe flash controllers and NVRAM drives, and maxSAS™ expander and FC disk interconnect products. Together, these products provide end-to-end semiconductor and software solutions to the industry’s leading storage OEMs and ODMs and hyperscale data centers. For more information, visit [http://www.pmcs.com/storage](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.pmcs.com%2Fstorage&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=http%3A%2F%2Fwww.pmcs.com%2Fstorage&index=3&md5=47da0a93781a4ce0f81d646d55463c57). + +About PMC + +PMC (Nasdaq:PMCS) is the semiconductor and software solutions innovator transforming networks that connect, move and store big data. Building on a track record of technology leadership, the company is driving innovation across storage, optical and mobile networks. PMC’s highly integrated solutions increase performance and enable next-generation services to accelerate the network transformation. For more information, visit [www.pmcs.com](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.pmcs.com&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=www.pmcs.com&index=4&md5=27dfd1934175ca7d0356aaa05112362c). 
Follow PMC on[Facebook](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.facebook.com%2Fpages%2FPMC-Sierra-Inc%2F362056543901598&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=Facebook&index=5&md5=9cf3507e875ca77c9b05afa858c80e9a), [Twitter](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Ftwitter.com%2F%23%21%2Fpmcsierra&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=Twitter&index=6&md5=27378c762fca7e21314bd854b6b080b2), [LinkedIn](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.linkedin.com%2Fcompany%2F4583%3Ftrk%3Dtyah&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=LinkedIn&index=7&md5=e9cab01024ff4d6c2caef110b2482dd7) and [RSS](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Finvestor.pmc-sierra.com%2Fphoenix.zhtml%3Fc%3D74533%26p%3DrssSubscription%26t%3D%26id%3D%26&esheet=51005136&newsitemid=20141217005306&lan=en-US&anchor=RSS&index=8&md5=e0cd95fc78ad9d195466b2aeffa8ccf1). + +© Copyright PMC-Sierra, Inc. 2014. All rights reserved. PMC, PMC-SIERRA, ADAPTEC and Adaptec by PMC are registered trademarks of PMC-Sierra, Inc. in the United States and other countries, PMCS and Flashtec are trademarks of PMC-Sierra, Inc. PMC disclaims any ownership rights in other product and company names mentioned herein. PMC is the corporate brand of PMC-Sierra, Inc. + +Source: PMC-Sierra, Inc. + +PMC-Sierra, Inc. Kim Mason Communications Manager, PMC [+1 604-415-6239](tel:%2B1%20604-415-6239) [kim.mason@pmcs.com](mailto:kim.mason@pmcs.com) or US Editorial: Sarmishta Ramesh [+1 303-296-4423](tel:%2B1%20303-296-4423) [pmcogilvy@ogilvy.com](mailto:pmcogilvy@ogilvy.com) diff --git a/content/blog/porting-gpu-accelerated-applications-to-power8-systems.md b/content/blog/porting-gpu-accelerated-applications-to-power8-systems.md new file mode 100644 index 0000000..c1f7135 --- /dev/null +++ b/content/blog/porting-gpu-accelerated-applications-to-power8-systems.md @@ -0,0 +1,48 @@ +--- +title: "Porting GPU-Accelerated Applications to POWER8 Systems" +date: "2014-12-01" +categories: + - "blogs" +tags: + - "featured" +--- + +By Mark Harris + +With the US Department of Energy’s announcement of plans to base [two future flagship supercomputers on IBM POWER CPUs, NVIDIA GPUs, and NVIDIA NVLink](http://devblogs.nvidia.com/parallelforall/how-nvlink-will-enable-faster-easier-multi-gpu-computing/) interconnect, many developers are getting started building GPU-accelerated applications that run on IBM POWER processors. The good news is that porting existing applications to this platform is easy. In fact, smooth sailing is already being reported by software development leaders such as Erik Lindahl, Professor of Biophysics at the Science for Life Laboratory, Stockholm University & KTH, developer of the[GROMACS](http://www.gromacs.org/) molecular dynamics package: + +> The combination of POWER8 CPUs & NVIDIA Tesla accelerators is amazing. It is the highest performance we have ever seen in individual cores, and the close integration with accelerators is outstanding for heterogeneous parallelization. Thanks to the little endian chip and standard CUDA environment it took us less than 24 hours to port and accelerate GROMACS. + +The [NVIDIA CUDA Toolkit version 5.5 is now available with POWER support](https://developer.nvidia.com/cuda-downloads-power8), and all future CUDA Toolkits will support POWER, starting with CUDA 7 in 2015. 
The Tesla Accelerated Computing Platform enables multiple approaches to programming accelerated applications: [libraries](https://developer.nvidia.com/gpu-accelerated-libraries) (cuBLAS, cuFFT, Thrust, AmgX, cuDNN and many more), compiler directives ([OpenACC](http://openacc.org/)), and [programming languages](https://developer.nvidia.com/language-solutions)(CUDA C++, CUDA Fortran, Python). You can use any of these approaches on GPU-accelerated systems based on x86, ARM, and now POWER CPUs, giving developers and system builders a choice of technologies for development and deployment. + +![common_programming_approaches](images/common_programming_approaches.png) + +The GPU portions of your application code don’t need to change when porting to POWER, and for the most part, neither do the CPU portions. GPU-accelerated code will generally perform the same on a POWER+GPU system compared to a similarly configured x86+GPU system (assuming the same GPUs in both systems). + +Porting existing Linux applications to POWER8 Linux on Power (LoP) is simple and straightforward. The new POWER8 Little Endian (LE) mode makes application porting even easier by eliminating data conversion complications. Even so, when targeting a new CPU, it’s useful to know the tools available for achieving highest performance. By knowing a handful of useful compiler flags and directives, you can get performance improvements right out of the gate. The following flags and directives are specific to IBM’s xlc compiler. + +## Useful Compiler Options and Directives + +POWER8 is known for its low latency and its high-bandwidth memory and SMT8 capabilities (8 simultaneous hardware threads per core). The `-qarch` and `-qtune` flags come in handy for automatic exploitation of the POWER8 ISA. + +\-qarch\=pwr8 \-qtune\=pwr8 + +For SMT-aware tuning, you can use sub-options to the `–qtune` option to specify the exact SMT mode. The options are `balanced`, `st` (single thread), `smt2`, `smt4` or `smt8`. SMT-aware optimizations allow for locality transformation and instruction scheduling. + +In addition to SMT tuning, automatic data prefetching, automatic SIMDization and Higher-Order Transformations (HOT) on loops can be enabled using `-O3 –qhot`. For best out-of-the-box results, you can combine options. + +\-O3 –qhot –qarch\=pwr8 –qtune\=pwr8 + +The automatic SIMDization compiler flag guarantees limited use of control flow pointers. The loop directive `#pragma independent`, can be used to tell the compiler a loop has no loop-carried dependencies. Use either the `restrict` keyword or the `disjoint` pragma when possible to tell the compiler that references do not share the same physical storage. Expose stride-one access when you can to limit strided accesses. + +By adding these flags and directives to your bag of tricks, you can significantly improve your application performance out of the box. + +## Get Started Now + +![IBM Redbook on POWER8 optimization](images/IBM_Redbook_POWER8_cover.jpg)For more performance optimization and tuning techniques (e.g.: dynamic SMT selection, gcc specifics, etc.), please refer to Chapter 6 (Linux) in [“Performance Optimization and Tuning Techniques for IBM Processors, including IBM POWER8”](http://www.redbooks.ibm.com/abstracts/sg248171.html). + +[Visit this IBM PartnerWorld page](https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_com_sys-hardware-for-solution-development) for information about developer access to POWER systems for evaluation, developing, and porting. 
## Get Started Now + +![IBM Redbook on POWER8 optimization](images/IBM_Redbook_POWER8_cover.jpg) For more performance optimization and tuning techniques (e.g., dynamic SMT selection, gcc specifics), please refer to Chapter 6 (Linux) in ["Performance Optimization and Tuning Techniques for IBM Processors, including IBM POWER8"](http://www.redbooks.ibm.com/abstracts/sg248171.html). + +[Visit this IBM PartnerWorld page](https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_com_sys-hardware-for-solution-development) for information about developer access to POWER systems for evaluation, development, and porting. POWER+GPU system access is available upon request. + +Joining the [CUDA registered developer program](https://developer.nvidia.com/cuda-registered-developer-program) is your first step in establishing a working relationship with NVIDIA Engineering. Membership gives you access to the latest software releases and tools, notifications about special developer events and webinars, and access to report bugs and request new features. + +The OpenPOWER Foundation was founded in 2013 as an open technical membership organization that enables data centers to rethink their approach to technology. Member companies can customize POWER CPU processors and system platforms for optimization and innovation for their business needs. These innovations include custom systems for large-scale data centers, workload acceleration with GPUs, FPGAs or advanced I/O, platform optimization for SW appliances, and advanced hardware technology exploitation. Visit [openpowerfoundation.org](https://openpowerfoundation.org/) to learn more. diff --git a/content/blog/porting-scientific-applications-to-openpower.md b/content/blog/porting-scientific-applications-to-openpower.md new file mode 100644 index 0000000..0d0c1f3 --- /dev/null +++ b/content/blog/porting-scientific-applications-to-openpower.md @@ -0,0 +1,30 @@ +--- +title: "Porting Scientific Applications to OpenPOWER" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Speaker and co-authors + +[Dirk Pleiter](https://www.linkedin.com/profile/view?id=316411112&authType=NAME_SEARCH&authToken=WPhM&locale=en_US&srchid=32272301421438109791&srchindex=1&srchtotal=1&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438109791%2CVSRPtargetId%3A316411112%2CVSRPcmpt%3Aprimary) (Jülich Supercomputing Centre) Andrew Adinets (JSC), Hans Böttiger (IBM), Paul Baumeister (JSC), Thorsten Hater (JSC), Uwe Fischer (IBM) + +### Abstract + +While significant experience with using GPUs alongside processors based on the x86 ISA has been gained over the past years, GPU-accelerated systems with POWER processors have become available only very recently. In this talk we report on early experiences of porting selected scientific applications to GPU-accelerated POWER8 systems. We will explore basic performance features through micro-benchmarks, but our main focus will be on results for full applications or mini-applications. These have been selected such that hardware characteristics can be explored for applications with significantly different performance signatures. The application domains range from physics to life sciences and have in common that they are in need of supercomputing resources. Particular attention will be given to the performance analysis capabilities of the system and the available software tools. Finally, we will report on a newly established POWER Acceleration and Design Center, which has the goal of providing support to scientists in using OpenPOWER technologies. + +### Speaker's bio + +Prof. Dr. Dirk Pleiter is research group leader at the Jülich Supercomputing Centre (JSC) and professor of theoretical physics at the University of Regensburg. At JSC he is leading the work on application-oriented technology development. Currently he is principal investigator of the Exascale Innovation Center, the NVIDIA Application Lab at Jülich as well as the newly established POWER Acceleration and Design Center. He has played a leading role in several projects for developing massively-parallel special purpose computers, including QPACE.
+ +### Speaker's organization + +Forschungszentrum Jülich – a member of the Helmholtz Association – is one of the largest research centres in Europe and member of the OpenPOWER Foundation since March 2014. It pursues cutting-edge interdisciplinary research addressing the challenges facing society in the fields of health, energy and the environment, and information technologies. Within the Forschungszentrum, the Jülich Supercomputing Centre (JSC) is one of the three national supercomputing centres in Germany as part of the Gauss Centre for Supercomputing (GCS). JSC operates supercomputers which are among the largest in Europe. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Dirk-Pleiter_OPFS2015_Juelich_031115_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/power-3-0.md b/content/blog/power-3-0.md new file mode 100644 index 0000000..8048db6 --- /dev/null +++ b/content/blog/power-3-0.md @@ -0,0 +1,37 @@ +--- +title: "Announcing a New Era of Openness with Power 3.0" +date: "2016-02-02" +categories: + - "blogs" +tags: + - "power" + - "featured" +--- + +_By Michael Gschwind, Chief Architect & Senior Manager, Power System Architecture, IBM_ + +I am excited to announce the availability of the next generation of the Power Architecture, ushering in a new era for systems. The new [Power Instruction Set Architecture 3.0](http://ibm.co/1SyPMlO) (Power ISA 3.0) marks the first generation of architecture developed and released since the creation of the OpenPOWER Foundation, building upon and sustaining the growth of the Foundation’s open ecosystem of collaborative innovation. + +![OpenPOWER_Summit2016_logo2](images/OpenPOWER_Summit2016_logo2-1024x370.jpg) + +The Power ISA 3.0 architecture reflects the values of our open ecosystem, enhancing the platform by continuing the evolution of the RISC ISA concepts pioneered by the Power Architecture to deliver high-performance scalable systems optimized around workload needs. The new architecture specification include enhancements such as: + +- Improved support for string and memory block operations with the vector string facility +- Expanded little-endian support +- Instruction fusion and PC-relative addressing in support of improved application portability +- Hardware garbage collection acceleration +- Enhanced in-memory database support +- Interrupt and system call enhancements +- Hardware support for the native Linux radix page table format + +These new updates mean that the most important operations of a broad range of workloads will benefit from targeted optimizations to accelerate them even as speedups from semiconductor technology improvements can no longer be taken for granted. + +Power ISA 3.0 supports the entire spectrum of application choices with a common architecture definition. This ensures the OpenPOWER ecosystem enjoys the same level of compatibility that IBM enterprise customers have enjoyed over the past three decades. Consequently, Power ISA 3.0 no longer has optional categories, or separate server and embedded ISA architecture options, as the new specification supports the entire range of implementations. 
This allows for simpler sharing of application and system software across the entire range of Power processor implementations, enabling software developers to more easily support a broader range of applications, and ensuring that OpenPOWER compliant applications truly support a “write once, run everywhere” application development model. + +In addition, the new Power ISA 3.0 enables architects to build on a solid base and protect today’s investments in Power-based software and solutions, by maintaining compatibility for applications developed in previous architecture generations.  Consequently, programs going back to the beginning of POWER remain compatible with the new Instruction Set Architecture defined by Power ISA 3.0. + +To learn more about the Power Instruction Set Architecture, read the full description at [http://ibm.co/1SyPMlO](http://ibm.co/1SyPMlO), or join me at the [OpenPOWER Summit 2016](https://openpowerfoundation.org/openpower-summit-2016/) to discuss this and other new developments from the OpenPOWER Foundation. + +* * * + +[![mkg](images/mkg.jpeg)](https://openpowerfoundation.org/wp-content/uploads/2016/02/mkg.jpeg)_Michael Gschwind is a Chief Architect for Power Systems and the Chief Engineer for Machine Learning and Deep Learning in IBM’s Systems Group.   He was also a Chief Architect responsible for creating the little-endian Power software environment which forms the foundation of the OpenPOWER ecosystem and the software environment for the Cell SPE, the first general purpose programmable accelerator. Dr. Gschwind is an IBM Master Inventor, a member of the IBM Academy and a Fellow of the IEEE._ diff --git a/content/blog/power-and-speed-maximizing-application-performance-on-ibm-power-systems-with-xl-cc-compiler.md b/content/blog/power-and-speed-maximizing-application-performance-on-ibm-power-systems-with-xl-cc-compiler.md new file mode 100644 index 0000000..4b775bd --- /dev/null +++ b/content/blog/power-and-speed-maximizing-application-performance-on-ibm-power-systems-with-xl-cc-compiler.md @@ -0,0 +1,30 @@ +--- +title: "Power and Speed: Maximizing Application Performance on IBM Power Systems with XL C/C++ Compiler" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Presentation Objective + +How to optimize your application to fully exploit the functionality of your POWER system. + +### Abstract + +This presentation will provide the latest news on IBM's compilers on Power. The major features to enhance portability such as improved standards compliance and gcc compiler source code and option compatibility will be presented. The presentation will also cover performance tuning and compiler optimization tips to maximize workload performance on IBM Power Systems including exploitation of the POWER8 processor and architecture. + +### Bio + +Yaoqing Gao is a Senior Technical Staff Member at IBM Canada Lab in the compiler development area. His major interests are compilation technology, optimization and performance tuning tools, parallel programming models and languages, and computer architecture. He has been doing research and development for IBM XL C/C++ and Fortran compiler products on IBM POWER, System z, CELL processors and Blue Gene.   He authored over 30 papers in journals and conferences.  He has been an IBM Master inventor since 2006 and authored over 30 issued and pending patents. 
+ +### Organization + +IBM + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Gao-Yaoqing-Li-Kelvin_OPFS2015_IBM_030615_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/power8-the-first-openpower-processor.md b/content/blog/power8-the-first-openpower-processor.md new file mode 100644 index 0000000..c4d0040 --- /dev/null +++ b/content/blog/power8-the-first-openpower-processor.md @@ -0,0 +1,20 @@ +--- +title: "POWER8 -- the first OpenPOWER processor" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Abstract + +The POWER8 processor is the latest RISC (Reduced Instruction Set Computer) microprocessor from IBM and the first processor supporting the new OpenPOWER software environment.   Power8 was designed to deliver unprecedented performance for emerging workloads, such as Business Analytics and Big Data applications, Cloud computing and Scale out Datacenter workloads.  It is fabricated using IBM's 22-nm Silicon on Insulator (SOI) technology with layers of metal, and it has been designed to significantly improve both single-thread performance and single-core throughput over its predecessor, the POWER7i processor. The rate of increase in processor frequency enabled by new silicon technology advancements has decreased dramatically in recent generations,  as compared to the historic trend. This has caused many processor designs in the industry to show very little improvement in either single-thread or single-core performance, and, instead, larger numbers of cores are primarily pursued in each generation. Going against this industry trend, the POWER8 processor relies on a much improved core and nest microarchitecture to achieve approximately one-and-a-half times the single-thread performance and twice the single-core throughput of the POWER7 processor in several commercial applications. Combined with a 50% increase in the number of cores (from 8 in the POWER7 processor to 12 in the POWER8 processor), the result is a processor that leads the industry in performance for enterprise workloads. This talk will describe the architecture and microarchitecture innovations made in the POWER8 processor that resulted in these significant performance benefits for cloud applications, workload optimization features for stream processing, analytics and big data workloads, and support for organic workload growth.  Finally, this talk will introduce the CAPI accelerator interface that offers system architects a way to accelerate their workloads with custom accelerators seamlessly integrating with the Power system architecture. 
Michael Gschwind, PhD; STSM & Manager, System Architecture, IBM Systems & Technology Group; Fellow, IEEE; Member, IBM Academy of Technology; IBM Master Inventor + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Gschwind2_OPFS2015_IBM_031315_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/powerful-heterogeneous-computing-development-tool-brought-by-new-technology.md b/content/blog/powerful-heterogeneous-computing-development-tool-brought-by-new-technology.md new file mode 100644 index 0000000..45a2b92 --- /dev/null +++ b/content/blog/powerful-heterogeneous-computing-development-tool-brought-by-new-technology.md @@ -0,0 +1,102 @@ +--- +title: "Powerful Heterogeneous Computing Development Tool Brought by New Technology" +date: "2019-03-14" +categories: + - "blogs" +tags: + - "featured" +--- + +**_An Interview With 2018 OpenPOWER / CAPI+OpenCAPI Competition Winners_** + +By Yang Dai and Yong Lu, IBM + +The 2018 OpenPOWER / CAPI + OpenCAPI heterogeneous computing design contest has come to an end. The 10 short-listed Chinese university teams who passed the semi-final stage spent three months developing, testing and tuning, and successfully delivered ten FPGA prototypes based on CAPI / OpenCAPI and the POWER9 platform. Their work demonstrates the power and innovation potential of heterogeneous computing architectures built on the tight integration of POWER processors and FPGAs. + +The contest started in July 2018 with 27 teams from 17 universities enrolled. Ultimately, the prototype "WebP Image Compression Acceleration Design Based on OpenCAPI," developed by the Fudan University Computing team, won first place. The other winning teams were: + +- "CAPI-based 1080p@30fps H.265/HEVC Heterogeneous Computing Video Encoder," developed by the Fudan University VIP team, won second place +- "Accelerated advertising click rate prediction algorithm based on CAPI interface," developed by the Shenzhen University Cai Group team, won third place +- "Pulsed neural network accelerator design," developed by the Zhejiang University ZJU\_SNN team, won the special award + +"CAPI / OpenCAPI is a major innovation in the field of heterogeneous computing and a key technology of the OpenPOWER ecosystem. We are delighted to see that the teams explored innovative applications of CAPI/OpenCAPI technology in areas such as video processing, encryption and decryption, artificial intelligence, deep learning and gene sequencing. The teams also gained a deeper understanding of the software and hardware behind OpenPOWER computational acceleration technology. + +We hope that, in the field of heterogeneous computing, they will gradually grow into the backbone of technological innovation!" said Hugh Blemings, executive director of the OpenPOWER Foundation. + +With heterogeneous computing on the rise, the teams were excited to get hands-on with CAPI. Let's see what these young students had to say. + +## **WebP Image Compression Acceleration Design Based on OpenCAPI** + +> "It's easy to use CAPI development to make software and hardware threads work together!" - the Fudan University Computing team, the first place winner of the competition, said. + +When IBM instructors introduced CAPI to us, they highlighted two technical features. The first is the cache-coherent interface.
With this interface, the FPGA has the same access rights to system memory as the CPU, which greatly reduces the software and hardware interface programming needed for FPGA-CPU interaction. It lets us use CPU memory as if we were accessing on-chip memory, without having to develop a driver. + +The second is that the CAPI-SNAP development framework supports high-level synthesis (HLS). It provides a convenient environment for porting C programs to CAPI heterogeneous platforms and shortens the development cycle. In particular, engineers save a great deal of development time on hardware/software interaction and do not need to worry about where data is stored on the host and the FPGA. + +These two features made a strong impression on our team. The topic we chose was WebP image compression acceleration. It had previously been a pure software design, and we were considering how to add hardware acceleration on the FPGA. Following the traditional approach, we would have had to think carefully about co-scheduling the FPGA hardware, for example by creating dedicated storage space for data exchange, and FPGA programming itself is cumbersome. The design of CAPI inspired us: its two key features seemed particularly well suited to our needs, so we built a multi-threaded design combining hardware and software, with two software threads and one hardware thread. The division of work between software and hardware is shown in the diagram below: + +![](images/Capi-image-1-1024x569.png) + +The first thread (a software thread) is responsible for preparing the data to be processed. It reads data from disk into memory and preprocesses it. Once finished, it places the data pointer into a global variable and returns immediately, starting to prepare the next batch without waiting for the hardware to finish. The second thread (the hardware thread) is conveniently controlled by semaphores so that it starts only after the data is ready, retrieving the data to be processed from the global variables. The third thread (a software thread) likewise obtains the processed data through global variables and then releases the memory. The three threads are independent of one another and execute asynchronously, which greatly improves overall efficiency. + +![](images/Capi-image-2-1024x594.png) + +In this multi-threaded framework, although the FPGA is still invoked by the CPU, it is kept busy with data processing almost without interruption, which maximizes FPGA utilization. Scheduling between threads is implemented entirely in software on the host, and no changes are needed at the hardware level. In other words, the hardware simply responds to CPU scheduling and can be treated like an ordinary function, so the design behaves as if three software threads were executing asynchronously. This convenience comes from CAPI's cache coherence. + +Looking at the project as a whole, using CAPI felt very natural, and developing on CAPI was a pleasure. In traditional FPGA heterogeneous computing, designing the data scheduling between the FPGA and the host usually takes a great deal of time and is rarely this smooth.
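The producer / accelerator / consumer structure the Fudan team describes can be sketched in plain host-side C with POSIX threads and semaphores. This is only an illustration of the pattern, not the team's code: `run_fpga_action()` is a placeholder for however the prepared buffer is actually handed to the hardware (for example, through a CAPI-SNAP action call), and a real design would rotate through several buffers rather than the single slot shown here.

```c
/* Illustrative sketch of the three-thread pipeline described above. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

#define N_ITEMS 8                                 /* buffers pushed through the pipeline */

static struct { void *buf; size_t len; } slot;    /* shared "global variable"            */
static sem_t slot_free, ready, done;              /* hand-off semaphores                 */

static void prepare_next(int i)                   /* read + preprocess one input (stub)  */
{
    slot.len = 4096;
    slot.buf = malloc(slot.len);
    printf("prepared item %d\n", i);
}

static void run_fpga_action(void)                 /* placeholder for the hardware call   */
{
    /* in the real design the FPGA reads/writes this host memory coherently here */
}

static void *producer(void *arg)                  /* thread 1: prepare data, move on     */
{
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&slot_free);                     /* with a ring of buffers this rarely blocks */
        prepare_next(i);
        sem_post(&ready);                         /* tell the hardware thread data is ready    */
    }
    return NULL;
}

static void *hw_thread(void *arg)                 /* thread 2: drive the FPGA when ready */
{
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&ready);
        run_fpga_action();
        sem_post(&done);                          /* results can now be collected */
    }
    return NULL;
}

static void *consumer(void *arg)                  /* thread 3: collect results, free memory */
{
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&done);
        free(slot.buf);
        sem_post(&slot_free);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    sem_init(&slot_free, 0, 1);
    sem_init(&ready, 0, 0);
    sem_init(&done, 0, 0);
    pthread_create(&t[0], NULL, producer, NULL);
    pthread_create(&t[1], NULL, hw_thread, NULL);
    pthread_create(&t[2], NULL, consumer, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```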
## **CAPI-based 1080p@30fps H.265/HEVC Heterogeneous Computing Video Encoder** + +> "The CAPI-SNAP framework is very fast, and most of the time we can focus on our own function design" - the Fudan University VIP team, the second place winner of the competition, said. + +We are a team from the Video Image Processing Laboratory of Fudan University. During the summer vacation last year, we heard about the CAPI Heterogeneous Computing Contest. After studying CAPI technology and the SNAP framework in depth, we realized that CAPI could easily be combined with the H.265/HEVC video encoder we already had in the lab and could help improve the performance of the video encoding solution, so we decided to participate. + +#### **CAPI experiences** + +At the beginning of October last year, we started work for the competition. In just two weeks, we completed functional simulation on CAPI. Development this fast exceeded our expectations, and it was largely due to the ease of use of the CAPI-SNAP framework. + +During development, we found that CAPI-SNAP gives developers considerable convenience at both the software and hardware levels. On the software side, we don't need to pay attention to how the underlying driver works; we just call the API provided by the SNAP library to interact with the PSL. On the hardware side, data exchange between the FPGA and the CPU is easily carried out through shared memory. Moreover, the software/hardware co-simulation environment provided by SNAP is convenient to use. In practice, we could spend most of our time refining our own design and only a small amount of time on CAPI system-level tasks such as environment setup and simulation. + +#### **Advantages of CAPI** + +In a CAPI system, thanks to the PSL, the FPGA and the CPU can easily share memory on the host. In our previous FPGA verification schemes, sharing memory between the FPGA and the CPU was very complex, so we usually stored image data in memory on the FPGA. Since on-card memory is usually small and its I/O performance is limited, it constrains the performance of IP deployed on the FPGA. In this design, therefore, we store the original image pixels and the encoder's intermediate data directly in host memory. This improves the encoder's read and write speed and also greatly simplifies data exchange between the FPGA and the CPU. + +## **Accelerated advertising click rate prediction algorithm based on CAPI interface** + +> "The interface between the FPGA and the host is simplified, and with a large bandwidth and low latency" - the Shenzhen University Cai Group team, the third place winner of the competition, said. + +Our research group has been working in this area for years: there are three POWER8 servers in our lab, and before CAPI 2.0 was released we already had four Alpha Data KU3 CAPI 1.0 FPGA cards, so we count ourselves among the earliest CAPI users. When I first came to the lab in 2017, the team was not yet using CAPI-SNAP for CAPI-based projects. At that time, they wrote the communication protocol between the user logic and the PSL layer directly. This had two problems: first, the logic was relatively complicated, because during development we had to handle not only the custom logic but also the communication interface with the PSL layer; second, the achievable bandwidth was still somewhat limited. + +The first time we used the CAPI-SNAP framework was in the 2018 OpenPOWER / CAPI + OpenCAPI Heterogeneous Computing Design Contest. The framework is easy to learn: IBM has published a lot of documentation and technical information, and after reading the examples provided, I quickly mastered the data path and data control methods.
The topic of our project in the competition was "Accelerated advertising click rate prediction algorithm based on CAPI interface." We chose this topic because advertising is one of the key ways many internet companies make money. To obtain a high return on advertising, it is necessary to quickly analyze the preferences of a large number of users: based on a user's attributes, the system predicts the probability that the user will click on an advertisement, and the preferred deep learning algorithm for this is the DeepFM model. In this application scenario the volume of user data is often very large, so moving it requires high transmission bandwidth and low latency. These problems are well addressed by the CAPI mechanism when programming with CAPI-SNAP. In my view, the advantages of CAPI are as follows: + +First, the CAPI-SNAP framework simplifies the data transmission interface between the host and the FPGA, so users do not need to care about the underlying interface protocol. We appreciated this deeply: compared with talking to the PSL layer directly, CAPI-SNAP provides a complete data path, and we only need to care about where the data is located and how much data is transmitted. This let us focus on hardware logic development and greatly improved our efficiency. + +Second, CAPI genuinely delivers large bandwidth and low latency. In our project, the large bandwidth reduces data transmission time, and the low latency allows our algorithm to meet the requirements of "high concurrency and low latency." + +Third, CAPI-SNAP provides a complete tool chain, covering code writing, simulation, synthesis, programming and other operations, all of which can be done on one platform. This makes it simple and efficient to use in practice. + +Of course, as a newly released product it also has some shortcomings that I hope will be addressed over time. First, I hope the SNAP framework can support more Vivado versions, or at least warn when an unsupported Vivado version is used; I once spent a long time tracking down a problem caused by a Vivado version incompatibility. Second, I hope simulation can be sped up: in our experiments, when the computation is complicated the simulation time is particularly long. Finally, I hope the CAPI-SNAP framework will be continuously maintained and enhanced, and that in the future it will support common artificial intelligence algorithms (such as CNNs); using CAPI for artificial intelligence research would be a cool thing as well.
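The "tell the accelerator where the data is and how much there is" workflow the Shenzhen team describes can be illustrated with a small host-side sketch. The names below (`accel_attach`, `accel_run_job`, `accel_detach`, the device path and the job-descriptor fields) are hypothetical stand-ins, not the actual SNAP API; the real host calls live in the SNAP library and its example actions.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the accelerator host API -- illustrative only. */
typedef struct accel_ctx accel_ctx;
typedef struct {
    uint64_t src_addr;   /* host address of the input (e.g. user feature rows)   */
    uint64_t dst_addr;   /* host address of the output (e.g. click probabilities) */
    uint32_t src_size;   /* bytes of input  */
    uint32_t dst_size;   /* bytes of output */
} job_desc;

extern accel_ctx *accel_attach(const char *device);           /* open card + attach action */
extern int        accel_run_job(accel_ctx *ctx, job_desc *j); /* kick off job and wait     */
extern void       accel_detach(accel_ctx *ctx);

int predict_ctr(const void *features, size_t in_bytes, float *scores, size_t n)
{
    accel_ctx *ctx = accel_attach("/dev/accel0");   /* device path is illustrative */
    if (!ctx)
        return -1;

    /* The buffers stay in ordinary host memory: the job descriptor only tells
     * the coherent accelerator where they are and how large they are. */
    job_desc job;
    memset(&job, 0, sizeof(job));
    job.src_addr = (uint64_t)(uintptr_t)features;
    job.src_size = (uint32_t)in_bytes;
    job.dst_addr = (uint64_t)(uintptr_t)scores;
    job.dst_size = (uint32_t)(n * sizeof(float));

    int rc = accel_run_job(ctx, &job);   /* FPGA reads/writes host memory directly */
    accel_detach(ctx);
    return rc;
}
```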
## **Pulsed neural network accelerator design** + +> "CAPI makes it possible to dynamically load the network structure from main memory, and the AXI interface and software/hardware co-simulation are very convenient" - the Zhejiang University ZJU\_SNN team, the winner of the special prize of the competition, said. + +In this CAPI competition, the topic we chose was "CAPI-based FPGA pulse neural network accelerator design." Through the design and development work of the competition, we came to appreciate that the primary advantage of CAPI as a coherent acceleration interface is its ease of use. In our previous experience with heterogeneous computing development, programming and debugging a CPU and an accelerator over a plain I/O interface is very difficult and requires considerable time and effort. So when we learned from IBM that the CAPI acceleration interface provides memory coherence, which can greatly reduce the difficulty of the software design, we decided to enter the competition and try CAPI for our design. + +Because of CAPI's coherence, the CPU can logically share dynamically allocated memory with the accelerator, which makes the communication programming between them straightforward. From the software perspective, the CPU can use very simple APIs to read from and write to the accelerator. + +On the hardware side, CAPI-SNAP presents an AXI interface and an AXI-Lite interface, so anyone familiar with SoC design does not need to learn a new interface protocol. During the design process we could concentrate entirely on the acceleration unit and the host CPU's control flow, without having to worry about the interface itself. + +The pulsed neural network accelerator we designed for this competition computes in units of time steps under the control of instructions from the host CPU, rather than completing all calculations in one pass. On one hand, this takes full advantage of the low latency of the CAPI interface: the host CPU can send commands to the accelerator to read back partial results at intermediate time steps and then adjust dynamically. On the other hand, at the same logical moment, the network structure and state variables stored in the CPU's main memory can be loaded dynamically onto the accelerator's processing unit and computed. This exploits the high bandwidth of CAPI, occupies only a small amount of on-chip SRAM, and can support a pulsed neural network far larger than the SRAM alone would allow. Due to time constraints, this part of the design was not fully implemented during the contest, but the bandwidth CAPI provides and our time-stepped computation scheme make it feasible. + +Finally, we found CAPI's tool chain and development flow very convenient and practical for designers, especially its support for hardware/software co-simulation. The co-simulation environment is built automatically from the hardware design code, and it can automatically record intermediate data from the simulation, such as the waveforms of the hardware modules. This saves a lot of verification time and makes it easy to locate bugs. + +## **Conclusion** + +Clearly, POWER9 combined with CAPI technology and the CAPI-SNAP development platform provides excellent ease of use, high efficiency, high bandwidth and low latency. If your application requires high bandwidth and low latency, then CAPI and OpenCAPI are the best choices for development. + +Using CAPI/OpenCAPI for FPGA-accelerated application development is much easier than you might think. Start your learning with the examples in CAPI-SNAP; it's time to embrace this era of heterogeneous computing! Check them out in the "actions" folder of the [CAPI-SNAP git project](https://github.com/open-power/snap/). + +If you would like more information, please contact us at [luyong@cn.ibm.com](mailto:luyong@cn.ibm.com) or [yangdai@cn.ibm.com](mailto:yangdai@cn.ibm.com).
diff --git a/content/blog/precision-medicine-barcelona-supercomputing.md b/content/blog/precision-medicine-barcelona-supercomputing.md new file mode 100644 index 0000000..fe1ec09 --- /dev/null +++ b/content/blog/precision-medicine-barcelona-supercomputing.md @@ -0,0 +1,26 @@ +--- +title: "Speeding Up Precision Medicine with Barcelona Supercomputing Center" +date: "2016-11-15" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Mateo Valero and Enric Banda, Barcelona Supercomputing Center_ + +![Barcelona Supercomputing Center joins OpenPOWER](images/BSC-blue-large-1024x255.jpg) + +The last decade has seen a worldwide increasing interest in Precision Medicine (PM). As a result, a number of computing platforms have been set up in different centres and countries following different strategies and road maps. + +A common challenge that all these initiatives face has to do with the management and analysis of genomic data. For this reason, the improvements and developments around the computing resources devoted to this goal have increased recently. The search for optimal software-hardware relationships to develop robust, efficient and accurate systems and environments for PM is one such example. + +Most public administrations have, therefore, paid attention to Precision Medicine either to give it momentum as a key part of biomedical research, such as the US, or to start introducing it into the public health system, like the United Kingdom. In Spain, the recent example comes from the Catalan Government, as recently expressed by the Department of Health in “Catalonia Crafts Strategic Framework for Personalised Medicine” published by the Personalised Medicine Coalition in its fall 2016 issue. + +The Barcelona Supercomputing Center has the conditions and skills to be a leading agent in Precision Medicine. Its Life Sciences department has long and successful experience in international genomic research projects such as those promoted by the International Cancer Genome Consortium. Its advanced research groups in high performance computing are specialists in managing big amounts of data, introducing cognitive techniques for its analysis and the development of computational technologies to apply to the most diverse scientific fields. Together, they are constructing hardware-software platforms to optimize the flows and pipelines of genomic variations analysis. It goes without saying that the BSC also has the appropriate infrastructure in terms of computing capacity as well as storage of massive amounts of data. + +Together with our experience of working closely with hospitals and clinicians, a fundamental part of the project and the one closest to patient’s interests, this combination makes our centre a perfect ecosystem for the development and application of computational approaches for clinical genomics. A recent competitive call for proposals on PM from the Catalan Government has shown that the BSC is centralizing the computing needs, as it is involved in most projects that are being carried near Barcelona. One of the most active hubs in biomedical research in Europe, the BSC is ready to tackle the opportunity to become a key element in PM projects in Spain. + +Needless to say, the complexity of the challenge makes the multi-stakeholder alliance a prerequisite. A platform is being designed within the BSC and will be shortly put in place to bind together the main actors and stakeholders in both research and health care. 
Having industrial technological partners willing to collaborate on the project is also a sine qua non. This is why BSC decided to join the OpenPOWER Foundation. The complementary knowledge provided by the foundation and the cooperation from IBM, with its new architectures and the huge capacity of IBM Watson, is undoubtedly a valuable asset. Pharmaceutical companies also have an essential role in this science, technology and health chain. Together, they form a chain to be woven as quickly and accurately as the health of the present and future generations deserves. + +For more information, see the [presentation we recently shared at the OpenPOWER Summit Europe](https://openpowerfoundation.org/wp-content/uploads/2016/10/3-Mateo-Barcelona-SuperComputing-Center.pdf). diff --git a/content/blog/qlogic-joins-openpower-foundation.md b/content/blog/qlogic-joins-openpower-foundation.md new file mode 100644 index 0000000..5c1eea3 --- /dev/null +++ b/content/blog/qlogic-joins-openpower-foundation.md @@ -0,0 +1,47 @@ +--- +title: "QLogic Joins OpenPOWER Foundation" +date: "2014-10-08" +categories: + - "press-releases" + - "blogs" +--- + +ALISO VIEJO, Calif., Oct. 8, 2014 (GLOBE NEWSWIRE) -- QLogic Corp. (Nasdaq:QLGC), a leading supplier of high performance network infrastructure solutions, today announced that it has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture. + +QLogic joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology to enable those responsible for data centers to rethink their approach to technology. The OpenPOWER Foundation aims at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers. The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform. + +"QLogic is looking forward to working with OpenPOWER member organizations to deliver our unique brand of innovation to the market. Today's demand for cloud-based services, along with the growing popularity of connected, mobile devices, require data center architectures to deliver incredible scalability, flexibility and performance," said Vikram Karvat, vice president of marketing, QLogic. "As a market leader in Fibre Channel and Ethernet adapters and the industry's frontrunner in innovative data center I/O solutions, QLogic will enhance functionality for highly virtualized, open standards-based, cloud and web-scale data centers based on the IBM POWER platform." + +The OpenPOWER Foundation aims to drive expansion of enterprise-class, data center hardware and software, giving the industry greater ability to innovate across the POWER platform. The OpenPOWER ecosystem enables customers to build best-in-class systems finely tuned to the POWER architecture. + +"The development model of the OpenPOWER Foundation is one based on collaboration and represents a new way of innovating around processor technology for big data and cloud," said Brad McCredie, president, OpenPOWER Foundation. "With QLogic joining this initiative we have significantly expanded our base of technology providers, as they bring a wealth of high performance networking and storage expertise, technology and innovation ability to OpenPOWER, allowing us to capitalize on emerging workloads." 
+ +### QLogic Gen 5 Fibre Channel Adapters: Greater Security, Reliability and Scalability + +QLogic Gen 5 Fibre Channel adapters are designed to tackle high bandwidth, I/O-intensive applications, such as virtualization, streaming media, online transaction processing, big data analytics and data warehousing where reliability is critical. The underlying driver stack in QLogic Gen 5 Fibre Channel technology is proven in more than 15 million ports shipped to enterprise data centers around the world. + +The QLogic dual-port ASIC is designed with the company's unique multi-port traffic isolation feature for greater reliability and security on dual-port models. This unique architecture, with complete on-chip CPU and memory isolation across both ports of the adapter, ensures that if one port should encounter issues, the second, isolated port will continue to function securely and without interruption. With two independent channels, I/O imbalances, error recovery or firmware updates on one port do not impact the second port. This enables the adapter to offer secure, deterministically predictive and scalable port performance and increased reliability. This is essential for enterprise data centers—assuring the highest levels of availability for mission-critical applications. + +### QLogic Ethernet Adapter Solutions: High Performance with Flexibility + +By delivering high performance Ethernet with low CPU utilization, QLogic adapters excel in virtualized environments. Featuring multiple protocol offload and concurrent LAN (TCP/IP) and SAN (FCoE, iSCSI) protocol processing over a shared Ethernet link, QLogic adapters offer maximum flexibility. Ultra-low CPU utilization frees up server cycles for business-critical applications and the increased mobility of virtual machines (VMs). QLogic QConvergeConsole™ adds multi-platform, single-pane-of-glass management of FCoE, iSCSI and TCP/IP protocols for ease-of-administration and converged network deployment. + +Follow QLogic @ [twitter.com/qlogic](http://twitter.com/qlogic) + +### QLogic – the Ultimate in Performance + +[QLogic](http://www.globenewswire.com/newsroom/ctr?d=10101753&l=13&a=QLogic&u=http%3A%2F%2Fwww.qlogic.com%2F) (Nasdaq:QLGC) is a global leader and technology innovator in high performance server and storage networking connectivity products. Leading OEMs and channel partners worldwide rely on QLogic for their server and storage networking solutions. For more information, visit [www.qlogic.com](http://www.globenewswire.com/newsroom/ctr?d=10101753&l=13&a=www.qlogic.com&u=http%3A%2F%2Fwww.qlogic.com%2F). + +### Disclaimer – Forward-Looking Statements + +_This press release contains statements relating to future results of the company (including certain beliefs and projections regarding business and market trends) that are "forward-looking statements" as defined in the Private Securities Litigation Reform Act of 1995. Such forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those projected or implied in the forward-looking statements. 
The company advises readers that these potential risks and uncertainties include, but are not limited to: potential fluctuations in operating results; gross margins that may vary over time; unfavorable economic conditions; the stock price of the company may be volatile; the company's dependence on the networking markets served; the ability to maintain and gain market or industry acceptance of the company's products; the company's dependence on a small number of customers; the company's ability to compete effectively with other companies; uncertain benefits from strategic business combinations, acquisitions and divestitures; the ability to attract and retain key personnel; the complexity of the company's products; declining average unit sales prices of comparable products; the company's dependence on sole source and limited source suppliers; the company's dependence on relationships with certain third-party subcontractors and contract manufacturers; sales fluctuations arising from customer transitions to new products; seasonal fluctuations and uneven sales patterns in orders from customers; changes in the company's tax provisions or adverse outcomes resulting from examination of its income tax returns; international economic, currency, regulatory, political and other risks; facilities of the company and its suppliers and customers are located in areas subject to natural disasters; the ability to protect proprietary rights; the ability to satisfactorily resolve any infringement claims; a reduction in sales efforts by current distributors; declines in the market value of the company's marketable securities; changes in and compliance with regulations; difficulties in transitioning to smaller geometry process technologies; the use of "open source" software in the company's products; system security risks, data protection breaches and cyber-attacks; and the company's ability to borrow under its credit agreement is subject to certain covenants._ + +_More detailed information on these and additional factors that could affect the company's operating and financial results are described in the company's Forms 10-K, 10-Q and other reports filed, or to be filed, with the Securities and Exchange Commission. The company urges all interested parties to read these reports to gain a better understanding of the business and other risks that the company faces. The forward-looking statements contained in this press release are made only as of the date hereof, and the company does not intend to update or revise these forward-looking statements, whether as a result of new information, future events or otherwise._ + +_QLogic and the QLogic logo are registered trademarks of QLogic Corporation. 
Other trademarks and registered trademarks are the property of the companies with which they are associated._ + +### CONTACT: + +Media Contact: Steve Sturgeon QLogic Corporation 858.472.5669 steve.sturgeon@qlogic.com + +Investor Contact: Doug Naylor QLogic Corporation 949.542.1330 doug.naylor@qlogic.com diff --git a/content/blog/recap-cdac-three-day-workshop-on-openpower-for-hpc-and-big-data-analytics.md b/content/blog/recap-cdac-three-day-workshop-on-openpower-for-hpc-and-big-data-analytics.md new file mode 100644 index 0000000..b12ad9b --- /dev/null +++ b/content/blog/recap-cdac-three-day-workshop-on-openpower-for-hpc-and-big-data-analytics.md @@ -0,0 +1,22 @@ +--- +title: "Recap: CDAC Three Day Workshop on OpenPOWER for HPC and Big Data Analytics" +date: "2016-09-27" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Dr. VCV Rao, Centre for Development of Advanced Computing_ + +![CDAC Logo](images/cdac.preview-300x228.png) + +Recently, the [Centre for Development of Advanced Computing (CDAC)](http://www.cdac.in/) in India held a three-day workshop where presenters from various industries examined the progress and opportunity to leverage OpenPOWER technology. The objective of the workshop was to understand the performance and scalability of high performance computing (HPC) application kernels, Big Data processing, and data science applications on RISC-based IBM POWER8 systems with GPUs as a part of the OpenPOWER Foundation. + +Representatives from IBM, Mellanox and CDAC discussed the POWER8 architecture, application performance compared to x86 systems, and how easily applications running on x86 can be ported to POWER8. They also discussed the Power architecture's roadmap, looking ahead to updates and enhancements. Finally, we discussed accelerator technologies like GPUs and FPGAs. With technologies like NVIDIA NVLink and CAPI, the Foundation is very well positioned to harness the power of acceleration. + +Over the three-day agenda, we learned a lot about high performance computing, in particular how to make use of NVIDIA GPUs in parallel programming to improve the performance of HPC applications. We also discussed how to achieve greater bandwidth by using Mellanox interconnects and how to expand our capabilities in FPGA programming. + +A lot of time was spent discussing how Big Data applications can scale with POWER8 and GPUs. To answer that question, the workshop introduced a range of compiler toolkits and libraries, such as CUDA for GPUs, along with hands-on work writing and testing parallel code with MPI and OpenMP. + +CDAC is dedicated to advancing supercomputing research, and workshops like these help us bring together discussion around many important topics. To learn more about CDAC and our work in the OpenPOWER Foundation, join us at future workshops by registering on our [Events Page](http://www.cdac.in/index.aspx?id=events). We look forward to the next workshop. Let us know what you would like to see on the agenda in the comments.
diff --git a/content/blog/red-hat-joins-openpower-foundation-adds-open-source-leadership-expertise-community-driven-hardware-innovation.md b/content/blog/red-hat-joins-openpower-foundation-adds-open-source-leadership-expertise-community-driven-hardware-innovation.md new file mode 100644 index 0000000..57133eb --- /dev/null +++ b/content/blog/red-hat-joins-openpower-foundation-adds-open-source-leadership-expertise-community-driven-hardware-innovation.md @@ -0,0 +1,29 @@ +--- +title: "Red Hat Joins the OpenPOWER Foundation, adds open source leadership and expertise to community-driven hardware innovation" +date: "2017-02-21" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +As part of our commitment to delivering open technologies across many computing architectures, today Red Hat has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture, at the Platinum level. While we already do build and support open technologies for the POWER architecture, the OpenPOWER Foundation is committed to an open, community-driven technology-creation process - something that we feel is critical to the continued growth of open collaboration around POWER. + +As a participant in the OpenPOWER community and a member of the Board of Directors (where we are currently represented by Scott Herold), we plan to focus on helping to create open source software for POWER-based architectures, offering more choice, control and flexibility to developers working on hyperscale and cloud-based data centers. Additionally, we’re excited to work with other technology leaders on advanced server, networking, storage and I/O acceleration technologies, all built on a set of common, open standards. + +We feel that open standards, like those being utilized by OpenPOWER, are critical to enterprise IT innovation, offering a common set of guidelines for the integration, implementation and security of new technologies. Modern standards bodies such as OpenPOWER and others seek to democratize guidelines across a broad, inclusive community, focusing on agility and providing a common ground for emerging technology. Red Hat is a strong proponent of open standards across the technology stack, participating in groups that cover the emerging software ([OCI](https://www.opencontainers.org/), [CNCF](https://www.cncf.io/)) as well as hardware ([CCIX](http://www.ccixconsortium.com/), [GenZ](http://genzconsortium.org/)) stacks. + +The development efforts of the OpenPOWER Foundation benefit many partners that we already work with, and we look forward to increased collaboration in an open, transparent environment. We’re also looking to support many other emerging technical areas of the community. These include machine learning and artificial intelligence, data platforms and analytics, as well as cloud and container deployments. + +We’re pleased to be a part of OpenPOWER, and look forward to helping craft community-driven collaborative designs that broaden customer technology choices across the breadth of enterprise IT. + +**From our partners** + +_Scot Schultz, Director, HPC and Technical Computing, Mellanox_ + +“As the technology stack becomes increasingly more complex, deploying virtual machines, cloud services and bare metal technologies must all interact simultaneously. It’s critical that we have a foundational set of standards that seamlessly work across hardware architectures. 
The OpenPOWER Foundation helps to set these standards for POWER systems, and Red Hat is an excellent addition to the Foundation’s leadership, both as a partner and for their extensive work in developing community-driven standards.” + +_Ken King, general manager, OpenPOWER, IBM_ + +“The development model of the OpenPOWER Foundation is one that elicits collaboration and represents a new way in exploiting and innovating around processor technology. POWER architecture is well tailored for many traditional and new applications, enabling OpenPOWER Foundation members like Red Hat to add their own innovations on top of the hardware technologies or create new solutions that capitalize on emerging workloads such as cognitive applications like AI and deep learning.” diff --git a/content/blog/redefining-developer-event.md b/content/blog/redefining-developer-event.md new file mode 100644 index 0000000..d3f6d7c --- /dev/null +++ b/content/blog/redefining-developer-event.md @@ -0,0 +1,30 @@ +--- +title: "Redefining the Developer Event" +date: "2017-04-13" +categories: + - "blogs" +tags: + - "featured" +--- + +By Randall Ross, Ubuntu Community Manager with Canonical (04.13.17) + +We’ve all been to “those other” developer events: Sitting in a room watching a succession of never-ending slide presentations. Engagement with the audience, if any, is minimal. We leave with some tips and tools that we might be able to put into practice, but frankly, we attended because we were supposed to. The highlight was actually the opportunity to connect with industry contacts. + +Key members of the OpenPOWER Foundation envisioned something completely different in their quest to create the perfect developer event, something that has never been done before: What if developers at a developer event actually spent their time developing? + +The OpenPOWER Foundation is an open technical membership organization that enables its member companies to provide customized, innovative solutions based on POWER CPU processors and system platforms that encourage data centers to rethink their approach to technology. The Foundation found that ISVs needed support and encouragement to develop OpenPOWER-based solutions and take advantage of other OpenPOWER Ready components. The demand for OpenPOWER solutions has been growing, and ISVs needed a place to get started. + +To solve this challenge, The OpenPOWER Foundation created the first ever Developer Congress, a hands-on event that will take place May 22-25 in San Francisco. The Congress will focus on all aspects of full stack solutions — software, hardware, infrastructure, and tooling — and developers will have the opportunity to learn and develop solutions amongst peers in a collaborative and supportive environment. + +The Developer Congress will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. Experts in the latest hot technologies such as deep learning, machine learning, artificial intelligence, databases and analytics, and cloud will be on hand to provide one-on-one advice as needed. + +As Event Co-Chair, I had an idea for a different type of event. One where developers are treated as “heroes” (because they are — they are the creators of solutions). 
My Event Co-Chair Greg Phillips, OpenPOWER Content Marketing Manager at IBM, envisioned an event where developers will bring their laptops and get their hands dirty, working under the tutelage of technical experts to create accelerated solutions. + +The OpenPOWER Developer Congress is designed to provide a forum that encourages learning from peers and networking with industry thought leaders. Its format emphasizes collaboration with partners to find complementary technologies, and provides on-site mentoring through liaisons assigned to help developers get the most out of their experience. + +Support from the OpenPOWER Foundation doesn’t end with the Developer Congress. The OpenPOWER Foundation is dedicated to providing its members with ongoing support in the form of information, access to developer tools and software labs across the globe, and assets for developing on OpenPOWER. + +The OpenPOWER Foundation is committed to making an investment in the Developer Congress to provide an expert-rich environment that allows attendees to walk away three days later with new skills, new tools, and new relationships. As Thomas Edison said, “Opportunity is missed by most people because it is dressed in overalls and looks like work.” So developers, come get your hands dirty. + +Learn more about the [OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress) diff --git a/content/blog/reflections-on-migrating-ibm-app-genomic-workflow-acceleration-to-ibm-power8.md b/content/blog/reflections-on-migrating-ibm-app-genomic-workflow-acceleration-to-ibm-power8.md new file mode 100644 index 0000000..1d80e23 --- /dev/null +++ b/content/blog/reflections-on-migrating-ibm-app-genomic-workflow-acceleration-to-ibm-power8.md @@ -0,0 +1,28 @@ +--- +title: "Reflections on Migrating IBM APP Genomic Workflow Acceleration to IBM POWER8" +date: "2015-01-16" +categories: + - "blogs" +--- + +**Author:** [Chandler Wilkerson](https://www.linkedin.com/profile/view?id=13493892&authType=NAME_SEARCH&authToken=jXYu&locale=en_US&srchid=32272301421439110454&srchindex=1&srchtotal=2&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421439110454%2CVSRPtargetId%3A13493892%2CVSRPcmpt%3Aprimary), Rice University + +### Objective + +To describe the challenges and lessons learned while installing the IBM Power Ready Platform for Genomic Workflow Acceleration on new IBM POWER8 hardware. + +### Abstract + +Migrating any workflow to a new hardware platform generates challenges and requires adaptability. With the transition from POWER7 to POWER8, the addition of PowerKVM obviates the need for VIOS and provides the opportunity to manage virtual machines on the POWER platform in a much more Linux-friendly manner. In addition, a number of changes to Red Hat’s Enterprise Linux operating system between versions 6 and 7 (7 being required for full POWER8 support at the time of this project’s start) have required modifying the standard processes outlined in the tested IBM solution. This presentation will take attendees through the growing pains and lessons learned while migrating a complex system to a new platform. + +### Bio + +Chandler has taken the lead on all IBM POWER related projects within Rice’s Research Computing Support Group since 2008, including a pre-GA deployment of POWER7 servers that turned into a 48-node cluster, Blue BioU. 
The RCSG team maintains a collection of different HPC resources purchased through various grants, and is experienced in providing as uniform a user experience between platforms as possible. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Wilkerson-Chandler_OPFS2015_RiceUniversity_031115_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/revolution-comes-europe.md b/content/blog/revolution-comes-europe.md new file mode 100644 index 0000000..3145e95 --- /dev/null +++ b/content/blog/revolution-comes-europe.md @@ -0,0 +1,39 @@ +--- +title: "The Revolution Comes to Europe!" +date: "2016-10-03" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Amanda Quartly, OpenPOWER Alliances Europe, IBM_ + +![opf_banner_004](images/opf_banner_004.png) + +The only constant to being involved with the OpenPOWER Foundation is change and innovation, and there is plenty happening! For instance, OpenPOWER members IBM and NVIDIA just launched a new set of servers [built for the cognitive and AI-driven age](https://www.ibm.com/blogs/systems/ibm-nvidia-present-nvlink-server-youve-waiting/). Now it's time for the focus to turn to Europe with the upcoming OpenPOWER European Summit, beginning 26 October through 28 October. + +Our membership and activities in Europe have continued to grow along with our efforts all over the world! This is your chance to find out the latest and hear the latest announcements from our members on how they’re driving the OpenPOWER ecosystem. + +Register for free today to join us for: + +- 20 keynotes from STFC Hartree Centre, AT&T, OpenStack, Kolab, NVidia, Mellanox, Kinetica, E4 and more to be announced. +- 22 breakout sessions featuring OPF members and OpenStack on OpenPOWER demonstrations. +- Numerous workgroup, Birds of a Feather and Panel sessions. +- The Rebel Alliance Reception on Thursday night to network with other OpenPOWER revolutionaries! + +We will highlight OpenPOWER adoption stories, new European members and new innovations based on OpenPOWER systems. Plus hear from developers and ISVs on what they’re doing, and be there for the announcement of the winners of the inaugural OpenPOWER Developer Challenge. And to top it all off, attendance is free! + +# For more details visit the official OpenPOWER Summit Europe page here: [https://openpowerfoundation.org/openpower-summit-europe/](https://openpowerfoundation.org/openpower-summit-europe/). + +  + +# Register for free on our Eventbrite: [http://bit.ly/2bd3dai](http://bit.ly/2bd3dai). + +  + +# For sponsorship opportunities, fill out the Sponsor Application: [https://goo.gl/forms/ceb24f6yZC2HjuEP2](https://goo.gl/forms/ceb24f6yZC2HjuEP2). + +  + +The OpenPOWER Foundation is pleased to be working with the OpenStack Summit. As we market these events together we recommend that you purchase a Full Pass or the Keynote and Markeplace Pass to be able to attend the OpenStack Summit. You can purchase an OpenStack pass on their webpage: [https://www.openstack.org/summit/](https://www.openstack.org/summit/). 
diff --git a/content/blog/sap-fosters-open-ecosystem-for-driving-customer-innovation.md b/content/blog/sap-fosters-open-ecosystem-for-driving-customer-innovation.md new file mode 100644 index 0000000..b64bba6 --- /dev/null +++ b/content/blog/sap-fosters-open-ecosystem-for-driving-customer-innovation.md @@ -0,0 +1,11 @@ +--- +title: "SAP Fosters Open Ecosystem for Driving Customer Innovation" +date: "2014-06-04" +categories: + - "press-releases" + - "blogs" +tags: + - "sap" +--- + +ORLANDO — Working closely with an active open ecosystem, SAP AG empowers customers, partners, startups and developers worldwide to innovate easily on top of the SAP HANA platform. By fostering strategic collaborations with partners, including Red Hat, IBM, HP and VMware, as well as startups, SAP plans to offer customers broader choice, ease of deployment and simplified IT environments, empowering them to transform their businesses by leveraging the advanced capabilities of SAP HANA. The announcement was made at SAPPHIRE NOW, being held June 3-5, 2014, in Orlando, Florida. diff --git a/content/blog/sc14-openpower-and-the-state-of-supercomputing.md b/content/blog/sc14-openpower-and-the-state-of-supercomputing.md new file mode 100644 index 0000000..944c1e0 --- /dev/null +++ b/content/blog/sc14-openpower-and-the-state-of-supercomputing.md @@ -0,0 +1,20 @@ +--- +title: "SC14: OpenPOWER and the State of Supercomputing" +date: "2014-11-26" +categories: + - "blogs" +tags: + - "featured" +--- + +By Ken King, GM of OpenPOWER Alliances, IBM + +Last week, SC14 brought together the brightest minds and organizations in the high-performance computing (HPC) industry. It was truly exciting to discuss with these experts some of the business challenges HPC is tackling today – from medical research to investment banking to weather forecasting – and possibilities for the future. IBM and the OpenPOWER Foundation shared our vision for the future of [technical computing](http://www-03.ibm.com/systems/technicalcomputing/), in which open innovation leads to accelerated and more compelling development of HPC systems. + +IBM kicked off the show highlighting our recently announced $325M contract award from the [U.S. Department of Energy](http://www-03.ibm.com/press/us/en/pressrelease/45387.wss) (DOE) to develop and deliver advanced “data centric” supercomputing systems, which will advance discovery in science, engineering and national security. In a move that could shake up the high performance computing industry, IBM’s new OpenPOWER-based systems use a data centric approach and put computing power everywhere data resides, minimizing data in motion, energy consumption, and cost/performance. These systems are the debut of OpenPOWER innovation in supercomputing and the result of the collaboration of OpenPOWER Foundation members, including IBM, NVIDIA and Mellanox. + +The DOE project is just the beginning when it comes to how IBM and the OpenPOWER Foundation plan to revolutionize supercomputing. The fact is that traditional supercomputing approaches are no longer keeping up with the enormous growth of big data and Moore's Law can no longer be relied on for historical performance gains; the industry needs open collaboration to develop the data centric, high performance systems required to tackle today’s biggest challenges. That’s where the OpenPOWER community comes in, as a force of material innovation vital to shaping the future of technical computing. 
With more than 70 companies, including NVIDIA, Mellanox, Altera and Nallatech, the OpenPOWER Foundation is incorporating advanced technology like GPUs, NICs and FPGA cards, all of which have the potential to transform today’s supercomputing capabilities in an open, integrated fashion. With these possibilities, the future of supercomputing is becoming more open than ever – and the OpenPOWER Foundation is leading the way.

In addition to the forward-thinking that was on display at SC14, we were also excited to see that the HPC industry is recognizing the disruptive potential of the combination of IBM Power Systems and OpenPOWER innovations. IBM won several HPCwire awards, including an Editor’s Choice for [Best HPC Server Product or Technology for IBM](http://www.hpcwire.com/2014-hpcwire-readers-choice-awards/12/) POWER8 processor-based systems, recognizing Power’s superior performance for HPC systems. The OpenPOWER Foundation also won an Editor’s Choice for [Top 5 New Products or Technologies to Watch](http://www.hpcwire.com/2014-hpcwire-readers-choice-awards/23/) for its potential to transform how HPC systems are built in the future.

We were pleased to have an impact on the annual SuperComputing event and to share our vision for the future of technical computing. We look forward to returning in the years to come to further showcase how the new data centric paradigm and open collaborative innovation in supercomputing (via OpenPOWER) are transforming the industry. diff --git a/content/blog/scaling-apache-spark.md b/content/blog/scaling-apache-spark.md new file mode 100644 index 0000000..0fa4d1b --- /dev/null +++ b/content/blog/scaling-apache-spark.md @@ -0,0 +1,75 @@ +--- +title: "Scaling-up Apache Spark" +date: "2017-12-06" +categories: + - "blogs" +tags: + - "capi" + - "big-data" + - "supercomputing" + - "high-powered-computing" + - "apache-spark" + - "power-8" + - "power-9" + - "opencapi" + - "big-data-analytics" +--- +

**By Ahsan Javed Awan, Research Associate, Imperial College London**

I recently completed my doctoral thesis, in which I characterize the performance of in-memory data analytics with Apache Spark on scale-up servers.

The sheer increase in the volume of data over the last decade has triggered research in cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark defines the state of the art in big data analytics platforms for exploiting data-flow and in-memory computing and for exhibiting superior scale-out performance on commodity machines, little effort has been devoted to understanding the performance of in-memory data analytics with Spark on modern scale-up servers.

Scale-out big data processing frameworks fail to fully exploit the potential of modern off-the-shelf commodity machines (scale-up servers) and require modern servers to be augmented with programmable accelerators near-memory and near-storage.

## **The Practicalities of Near Data Accelerators Augmented Scale-up Servers for In-Memory Data Analytics**

Cluster computing frameworks like Apache Flink, Apache Spark and Apache Storm are increasingly being used to run real-time streaming analytics. These frameworks were designed to run on clusters of commodity machines. Given the poor multi-core scalability of such frameworks, we hypothesize that scale-up machines augmented with coherently attached FPGA can deliver enhanced performance for in-memory big data analytics. 
+ 

- ### **High level design**

The figure below shows our high-level solution. The naive approach of offloading hot-spot functions identified by a profiler like Intel VTune does not work here: as our profiling experience with Apache Spark and Apache Flink reveals, there is no single hot-spot function that contributes more than 50% of the total execution time; instead, there are many hot-spot functions, each contributing up to 10-15% of the total execution time.

Other ways of accelerating big data processing frameworks like Apache Spark are offloading individual tasks or offloading the entire algorithm. By comparing previous studies, we find that offloading the entire algorithm incurs less JVM-FPGA communication overhead than offloading the individual tasks. Thus, we choose to offload the entire algorithm outside the Spark framework, even though the algorithm is still written following the MapReduce programming model. The mapping decisions between CPU and FPGA are taken outside the JVM.

- ### **CAPI specific optimization**

CAPI allows hardware and software threads to be coupled in a very fine-grained manner. Shared virtual memory is the key innovation of the OpenCL standard and allows host and device platforms to operate on shared data-structures using the same virtual address space. We pass the pointers to the CAPI accelerators to read the data directly from the Java heap, which removes the overhead of pinned buffers on host memory. With CAPI, the accelerators have access to the whole system memory at TB scale and can therefore work on big data sets.

- ### **HDL vs. HLL**

The main obstacle for the adoption of FPGAs in big data analytics frameworks is the high programming complexity of hardware description languages (HDLs). In recent years, there have been several efforts from the main FPGA and system vendors to allow users to program FPGAs using high-level synthesis (HLS) languages like OpenCL, or domain-specific languages like OpenSPL. Although HDLs can provide higher speedups, the lower programming complexity of high-level languages (HLLs) makes them very attractive to the big data community. We use SDSoC to generate the hardware accelerators. With the support of OpenCAPI in SDAccel, it would become even easier to integrate customized hardware accelerators with POWER9 processors.

## **Contrasts from existing literature**

Our work differs from the existing literature in the following ways:

1. We focus on hiding the data communication overhead by offloading the entire algorithm and exploiting data-reuse on the FPGA side. In our work, data is read from the Java heap for optimized C++ processing on the CPUs and hardware acceleration on the FPGAs, and final results are copied back into Spark using memory-mapped byte buffers.
2. We exploit CAPI to further reduce the communication cost.
3. We use co-processing on the CPUs as well as the FPGA to finish all the map tasks as quickly as possible.

## **Recommendations to improve performance of Spark on a scale-up server**

Our work finds that the performance bottlenecks in Spark workloads on a scale-up server are frequent data accesses to DRAM, thread-level load imbalance, garbage collection overhead and wait time on file I/O. To improve the performance of Spark workloads on a scale-up server, we make the following recommendations:

1. Spark users should prefer DataFrames over RDDs when developing Spark applications, and input data rates should be large enough for real-time streaming analytics to exhibit better instruction retirement.
2. 
Spark should be configured to use executors with memory size less than or equal to 32 GB, and each executor should be restricted to NUMA-local memory.
3. The GC scheme should be matched to the workload.
4. Next-line L1-D and adjacent cache line L2 prefetchers should be turned off, and DDR3 speed should be configured to 1333 MT/s.
5. Hyper-threading should be turned on; SMT-4 mode in Power 8/9 processors is a sweet spot for Spark workloads.

## **Future Work:**

The recently released IBM Power System AC922 features Power9, NVLink, PCIe-Gen4 and OpenCAPI. The seamless integration of GPUs, FPGAs and CPUs in a single scale-up server clearly sets the stage for scale-in clusters (fewer, more powerful nodes connected over a high-speed network), and we will explore the mapping of iterative MapReduce workloads like Apache Spark MLlib onto such systems.

## **Further Reading**:

Awan, A. J. (2017). _Performance Characterization and Optimization of In-Memory Data Analytics on a Scale-up Server_ (Doctoral dissertation, KTH Royal Institute of Technology, Sweden and Universitat Politecnica de Catalunya, Spain)

[https://www.academia.edu/35196109/Performance\_Characterization\_and\_Optimization\_of\_In-Memory\_Data\_Analytics\_on\_a\_Scale-up\_Serv](https://www.academia.edu/35196109/Performance_Characterization_and_Optimization_of_In-Memory_Data_Analytics_on_a_Scale-up_Server)

https://databricks.com/session/near-data-computing-architectures-apache-spark-challenges-opportunities diff --git a/content/blog/servergy-builds-the-bridge-between-open-compute-and-openpower.md b/content/blog/servergy-builds-the-bridge-between-open-compute-and-openpower.md new file mode 100644 index 0000000..36e48ac --- /dev/null +++ b/content/blog/servergy-builds-the-bridge-between-open-compute-and-openpower.md @@ -0,0 +1,9 @@ +--- +title: "Servergy Builds the Bridge between Open Compute and OpenPOWER" +date: "2014-05-18" +categories: + - "press-releases" + - "blogs" +--- +

DUBAI, United Arab Emirates--(BUSINESS WIRE)--Cleantech IT innovation and design firm Servergy, Inc. announced today at an IDC Open Compute Project event held in Dubai at the Burj Al Arab that it has partnered with the University of Texas San Antonio (UTSA) Cloud and Big Data Laboratory, the only North America Open Compute Lab, to create an open innovation bridge between Open Compute Project and OpenPOWER for the benefit of both communities. diff --git a/content/blog/servergy-joins-the-openpower-foundation.md b/content/blog/servergy-joins-the-openpower-foundation.md new file mode 100644 index 0000000..5014ccf --- /dev/null +++ b/content/blog/servergy-joins-the-openpower-foundation.md @@ -0,0 +1,9 @@ +--- +title: "Servergy Joins the OpenPOWER Foundation" +date: "2014-03-24" +categories: + - "press-releases" + - "blogs" +--- +

DALLAS–(BUSINESS WIRE)–Cleantech IT innovations company Servergy, Inc., announced today the company has joined IBM, Google, Mellanox, NVIDIA, Samsung Electronics, Tyan and Suzhou PowerCore Technology Company in the OpenPOWER Foundation – an open development alliance that makes IBM’s POWER microprocessor architecture available under license. Servergy will collaborate within the Foundation on opportunities leveraging Servergy’s clean and green technology on Power architecture with scale-up and scale-out capability for Big Data, caching, streaming, cloud workload, and distributed storage applications in data centers. 
diff --git a/content/blog/servergy-partners-with-university-of-texas-san-antonio-utsa-cloud-and-big-data-lab-to-create-first-open-compute-lab-for-power8.md b/content/blog/servergy-partners-with-university-of-texas-san-antonio-utsa-cloud-and-big-data-lab-to-create-first-open-compute-lab-for-power8.md new file mode 100644 index 0000000..e0bb499 --- /dev/null +++ b/content/blog/servergy-partners-with-university-of-texas-san-antonio-utsa-cloud-and-big-data-lab-to-create-first-open-compute-lab-for-power8.md @@ -0,0 +1,9 @@ +--- +title: "Servergy Partners with University of Texas San Antonio (UTSA) Cloud and Big Data Lab to Create First Open Compute Lab for Power8" +date: "2014-05-22" +categories: + - "press-releases" + - "blogs" +--- +

SAN ANTONIO--(BUSINESS WIRE)--Servergy and the University of Texas, San Antonio announced an open innovation bridge and new lab, between IBM’s OpenPOWER and the Open Compute community, to accelerate the pace of open innovation for the benefit of both communities and the industry at large. diff --git a/content/blog/setting-high-standards-for-openpower-hardware-architecture.md b/content/blog/setting-high-standards-for-openpower-hardware-architecture.md new file mode 100644 index 0000000..a0af5ee --- /dev/null +++ b/content/blog/setting-high-standards-for-openpower-hardware-architecture.md @@ -0,0 +1,26 @@ +--- +title: "Setting High Standards for OpenPOWER Hardware Architecture" +date: "2015-12-09" +categories: + - "blogs" +--- +

 

[![33601413](images/33601413.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/33601413.jpg)By Michael Gschwind, Chief Architect & Senior Manager, Power System Architecture, IBM

When we founded OpenPOWER to create a new inclusive ecosystem built around collaborative innovation, we knew that innovation needed to be built around a core of common standards. We needed to ensure interoperability of new technologies and to give assurance to hardware manufacturers, software developers, partners and customers that the choice for OpenPOWER was an investment in the future: a choice for a future with growing performance, growing markets and interoperable solutions built to last.

At the core of this open ecosystem, we needed a platform of uncompromising quality and compatibility across hardware and software upon which to build transformative solutions for a connected planet. We were fortunate to have an unparalleled breadth of skills among the founding members to set the course. Each company had revolutionized their respective field or fields: computer graphics, accelerators, high-speed networking, innovative system design, system virtualization, modern computer architecture and design, internet content services, hyperscale data centers, cloud computing,…

To create a common reference point for the entire ecosystem, together we created the first three OpenPOWER workgroups, for Hardware Architecture, System Software, and Architecture Compliance. We tasked these groups with identifying and standardizing the fundamental system functions that would serve as the common reference for the ecosystem.

A year has passed since the creation of the first OpenPOWER workgroups, and these workgroups have been busy setting the standards that will enable the ecosystem to grow even more. 
As the Chair of the Hardware Architecture Workgroup, I am particularly delighted to share the availability of the Hardware Architecture Work Group Specification Public Review Draft for the first generation of OpenPOWER hardware architecture, and I would like to solicit your review and your feedback: + +**[OpenPOWER ISA Profile Public Review Draft](https://members.openpowerfoundation.org/document/dl/500)**: The purpose of the OpenPOWER Instruction Set Architecture (ISA) Profile specification is to describe the categories of the POWER ISA Version 2.07 B that are required in the OpenPOWER chip architecture for IBM POWER8 systems. [Click here to submit a comment or subscribe to the comment email distribution list.](http://lists.publicreview.openpowerfoundation.org/mailman/listinfo/isa_profile_review) + +[**IODA2 Specification Public Review Draft**](https://members.openpowerfoundation.org/document/dl/328): The purpose of the I/O Design Architecture, version 2 (IODA2) specification is to describe the chip architecture for key aspects of PCIe-based host bridge (PHB) designs for IBM POWER8 systems. [Click here to submit a comment or subscribe to the comment email distribution list.](http://lists.publicreview.openpowerfoundation.org/mailman/listinfo/ioda2_review) + +**[CAIA Specification Public Review Draft](https://members.openpowerfoundation.org/document/dl/615)**: This document defines the Coherent Accelerator Interface Architecture (CAIA) for the IBM® POWER8 ® systems. The information contained in this document allows various CAIA-compliant accelerator implementations to meet the needs of a wide variety of systems and applications. Compatibility with the CAIA allows applications and system software to migrate from one implementation to another with minor changes. + +The commenting period for all three Hardware Architecture Workgroup standards track documents closes on January 10, 2016.  I want to take this opportunity to thank the over 100 members of the workgroup for their ongoing active participation and thoughtful contributions in defining these proposed OpenPOWER specifications. diff --git a/content/blog/skymind-machine-learning-notebooks-production.md b/content/blog/skymind-machine-learning-notebooks-production.md new file mode 100644 index 0000000..27b9b24 --- /dev/null +++ b/content/blog/skymind-machine-learning-notebooks-production.md @@ -0,0 +1,33 @@ +--- +title: "Skymind: Machine Learning from Notebooks to Production" +date: "2017-12-15" +categories: + - "blogs" +tags: + - "openpower" + - "deep-learning" + - "machine-learning" + - "openpower-foundation" + - "artificial-intelligence" + - "ai" + - "skymind" + - "adam-gibson" +--- + +By Adam Gibson, Founder and Chief Technology Officer, Skymind + +The world of artificial intelligence is developing at an ever-increasing pace. + +At Skymind, our mission is to make deep learning simple and accessible to enterprises. We’re tackling some of the most advanced problems in data analysis and machine intelligence and building AI systems for enterprises allowing them to build and deploy neural networks at a large scale. + +[![](images/skymind-1024x403.png)](https://openpowerfoundation.org/wp-content/uploads/2017/12/skymind.png) + +We’ve built the [Skymind Intelligence Layer (SKIL)](https://skymind.ai/platform), a software distribution for powering AI clusters. 
We provide the ability to bridge data science workflows to production deployments with production-grade monitoring, scheduling and integrations needed to connect to enterprises’ different data workflows. This allows for the easy deployment of TensorFlow and other deep learning frameworks into production environments. We’re also the company behind [deeplearning4j](https://deeplearning4j.org/), an open-source, distributed, deep learning library for the Java Virtual Machine (JVM).

Our current focus is to scale our product portfolio on top of SKIL, allowing enterprises to put AI to use and reap its benefits without worrying about deployment within specific verticals, like financial services, health care and robotics.

## **Skymind and OpenPOWER Foundation**

We decided to join the OpenPOWER Foundation to drive innovation in hardware and AI. Joining the group will enable us to provide more powerful AI solutions and allow the production use of tools like NVLink. Deploying AI solutions on Spark and on Power will help us advance even further.

To learn more about Skymind, visit our [website](https://skymind.ai/) or follow us on [Twitter](https://twitter.com/deeplearning4j), [Facebook](https://www.facebook.com/deeplearning4j/) and [LinkedIn](https://www.linkedin.com/company/skymind-io/). diff --git a/content/blog/spotted-owls-openpower.md b/content/blog/spotted-owls-openpower.md new file mode 100644 index 0000000..558ef99 --- /dev/null +++ b/content/blog/spotted-owls-openpower.md @@ -0,0 +1,36 @@ +--- +title: "Spotted owls and OpenPower" +date: "2018-11-26" +categories: + - "blogs" +tags: + - "featured" +--- +

_[This post was originally published by IBM.](https://developer.ibm.com/linuxonpower/2018/11/21/spotted-owls-and-openpower/)_

The U.S. Forest Service and Oregon State University’s Department of Fisheries and Wildlife have monitored northern spotted owls in the Pacific Northwest since the early 1990s under the Northwest Forest Plan. Historically, monitoring has involved broadcasting recorded owl calls in the hope of eliciting a response from real owls. However, as spotted owl populations have declined due to habitat loss and competition from invasive barred owls, this technique has become less and less reliable. We conducted a pilot study in 2017 in which we placed autonomous recording units (ARUs) at 150 sites across Oregon and Washington. By the end of the field season the ARUs had collected 68 terabytes of audio data representing almost 162,000 hours of continuous sound (figure step 1). Manually locating and tagging relevant sounds within these recordings is costly and time-consuming, creating a serious lag between data collection and analysis and limiting the feasibility of large-scale studies. As the Forest Service continues to deploy more and more ARUs, we needed to automate the review process and create a data processing pipeline that would scale up alongside the project’s field component and be flexible enough to incorporate additional target species as needed.

In cooperation with Oregon State University’s Center for Genome Research and Biocomputing (CGRB), we developed a convolutional neural network (CNN) that can accurately identify calls from several owl species by analyzing spectrograms generated from short sound clips. 
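To make the idea concrete, the short Python sketch below turns a single short clip into a log-scaled spectrogram and scores it with a trained CNN. It is an illustration only, not the project's actual code: the file names, spectrogram parameters, class labels and model file are hypothetical placeholders.

```python
# Illustrative sketch only -- not the project's pipeline. The model file,
# clip name, spectrogram parameters and class labels are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
from tensorflow.keras.models import load_model

CLASSES = ["spotted_owl", "barred_owl", "other"]   # hypothetical label set

def clip_to_spectrogram(path):
    """Load a short WAV clip and convert it into a normalized log spectrogram."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                              # mix stereo recordings down to mono
        audio = audio.mean(axis=1)
    _, _, sxx = spectrogram(audio, fs=rate, nperseg=1024, noverlap=512)
    sxx = np.log10(sxx + 1e-10)                     # log power compresses the dynamic range
    sxx = (sxx - sxx.min()) / (sxx.max() - sxx.min())  # scale to [0, 1]
    return sxx[np.newaxis, ..., np.newaxis]         # add batch and channel dimensions

model = load_model("owl_cnn.h5")                    # hypothetical trained CNN
probs = model.predict(clip_to_spectrogram("clip_000123.wav"))[0]
for label, p in zip(CLASSES, probs):
    print(f"{label}: {p:.3f}")
```

The actual pipeline, described next, applies this same clip-to-spectrogram-to-CNN pattern at scale across terabytes of recordings.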
Our data processing pipeline involves segmenting raw WAV files into 12-second clips (step 2), generating spectrograms from the clips (step 3), and feeding these into the CNN, which uses filters to detect patterns within the spectrogram and outputs the probability that each spectrogram contains one or more of our target species (step 4). The segmentation and spectrogram generation steps turned out to be the most time-intensive stage; even a single site might produce ~500 GB of data, which could take several days to process using consumer-grade hardware with standard PCI-based GPUs. We needed to speed up the process if we wanted to keep up with the quantities of data generated from multiple sites over longer periods of time.

IBM offered the opportunity to test our CNN on their new OpenPOWER-based POWER8 systems with GPGPUs integrated with the main system board. We were excited to see that we did not need to recompile our Linux-based tools; they just worked, the same as on the hardware we were already using. These machines cut our processing time from **30** hours to just over **6** hours. This represented a **5x** speedup when using the IBM hardware. We have provided a table with some average run times (hh:mm:ss) for small data sets to show the increased speed we found. As we move to the cloud, we plan to use tools like Kinetica to visualize the information quickly and Nimbix to help process the data efficiently using IBM OpenPOWER-based systems with GPGPU support.

| **Run Type** | **K80 on x86** | **P100 on IBM OpenPOWER** |
| --- | --- | --- |
| _Training the CNN_ | 30:06:13 | 06:18:07 |
| _Generating predictions_ | 00:45:35 | 00:08:44 |
| _Full data processing_ | 01:22:00 | 00:39:50 |

[![](images/IBM-blog-post-graphic.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/IBM-blog-post-graphic.png)

+++

Authored by:

_**Zack Ruff**_ _ORISE Post-Graduate Fellow working with Damon Lesmeister in U.S. Forest Service Oregon State University_

_**Bharath K. Padmaraju**_ _Undergraduate Computational Scientist_ _Oregon State University_

_**Christopher M. Sullivan**_ _Assistant Director for Biocomputing_ _Center for Genome Research and Biocomputing_ _Oregon State University_

_**Damon Lesmeister**_ _Research Wildlife Biologist and Team Leader _ _U.S. Forest Service and _ _Pacific Northwest Research Station Wildlife Ecology Team_ diff --git a/content/blog/sqream-openpower-summit-europe.md b/content/blog/sqream-openpower-summit-europe.md new file mode 100644 index 0000000..0a43878 --- /dev/null +++ b/content/blog/sqream-openpower-summit-europe.md @@ -0,0 +1,32 @@ +--- +title: "Opening the POWER of Massive Data: SQream at OpenPOWER Summit Europe" +date: "2018-10-25" +categories: + - "blogs" +tags: + - "featured" +--- +

By: David Leichner, chief marketing officer, SQream

Earlier this month, SQream was invited to participate in [OpenPOWER Summit Europe](https://sqream.com/event/openpower-summit-europe/). As the top European event for anyone in the OpenPOWER ecosystem, it was one I was thrilled to take part in. The event was extremely high-quality, with a great mix of developers, engineers, executives, and researchers. The quality of the audience was second only to the terrific lineup of top-notch speakers and presentations.

VP Product Ayelet Heyman and I gave a presentation titled **IBM POWER9, NVIDIA GPU and SQREAM DB: Tackling the Challenges of Massive Data Analytics**. 
In the session, we took a look at how combining modular components and sophisticated algorithms can deliver fast analytics performance, by limiting the effect of I/O on data-intense queries and models. We showed how arranging data for the GPU, combined with fast GPU compression, metadata mapping and several other techniques can accelerate real database physical operators. We also looked at how the combination of high throughput processors like IBM POWER9 and SQream's GPU data warehouse delivers an analytics platform that breaks down these barriers. + +[![SQream at OpenPOWER Summit Europe 2018](images/SQream-at-OpenPOWER-Summit-Europe-2018.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/10/SQream-at-OpenPOWER-Summit-Europe-2018.jpg) + +We demonstrated how the increased performance and processing power available in POWER9 combined with SQream's advanced algorithms for fast data access, and ability to join any table on any keys without manual indexing or pre-aggregating enables organizations to query much larger data stores, much faster. We also looked at how high throughput compute devices, like IBM POWER9 and NVIDIA Tesla GPU accelerators, can handle data significantly faster than other data solutions. This ability, coupled with improved I/O through clever software techniques, can not only reduce latency for the AI/ML/DL pipeline, but also operate at much larger scales. + + + + +During the presentation, I introduced [SQream DB for IBM POWER9](https://www.prnewswire.com/news-releases/gpu-accelerated-data-warehouse-sqream-db-boosts-query-performance-by-up-to-150-for-ibm-power9-users-826338125.html). The powerful trinity of SQream DB, IBM POWER9 and NVIDIA GPUs results in advanced performance, with SQL query performance improvements of up to 150% versus GPU-equipped x86 servers, with reduced cost and systems complexity. The solution provides users with a super-charged combination of IBM’s powerful processor and SQream’s DB data warehouse for unparalleled speed, performance and scale, while enabling significantly improved analytics. + + + + +OpenPOWER Summit Europe is a very high quality event, and I highly recommend it for companies already on POWER9 as well as those considering the move. + +\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_ + +_David Leichner is the CMO at SQream Technologies. David started his career as a database and network programmer at leading US corporations Salomon Brothers and TRW. Since moving to the vendor world, David has held technical and executive management positions at Information Builders, Magic Software and BluePhoenix. He has been involved in bringing solutions to market on IBM infrastructure since the mid-90s. 
At SQream, David is responsible for creating and executing the strategy that forms the foundation for SQream’s global market penetration._ diff --git a/content/blog/students-ai4good-openpower-summit-europe.md b/content/blog/students-ai4good-openpower-summit-europe.md new file mode 100644 index 0000000..50b69a2 --- /dev/null +++ b/content/blog/students-ai4good-openpower-summit-europe.md @@ -0,0 +1,30 @@ +--- +title: "Students Participate in the AI4Good Hackathon at OpenPOWER Summit Europe 2018" +date: "2018-10-22" +categories: + - "blogs" +tags: + - "featured" +--- + +By: Ganesan Narayanasamy, OpenPOWER leader in Education and Research, IBM Systems + +[![AI4Good Hackathon at OpenPOWER Summit Europe 2018](images/AI4Good-1024x768.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/10/AI4Good.jpg) + +OpenPOWER Summit Europe 2018, held earlier this month in Amsterdam, was an excellent opportunity for attendees to learn, collaborate and practice their skills. + +During the Summit, we hosted an AI4Good Hackathon. AI4Good empowers participants to use their coding skills to help others. In our case, teams competed to build predictive machine learning and deep learning models to help detect the risk of lung tumors. + +Nine students from the New Horizon College of Engineering in India participated in the hackathon and placed second in the competition. They each shared feedback on their experience: + +- [Anirudh Pachangam](https://www.linkedin.com/feed/update/urn:li:activity:6455530458062131200/): “The OpenPOWER Summit conducted at the RAI, Amsterdam was an amazing convention about artificial intelligence.” +- [Mithun Venkat](https://www.linkedin.com/feed/update/urn:li:activity:6455899896636588032/): “This Summit gave me a lot of opportunities to explore the domains of machine learning and artificial intelligence.” +- [Sanjana Ranjan](https://www.linkedin.com/feed/update/urn:li:activity:6456557170254237696/): “It was fascinating to see various different companies with different backgrounds use AI and Deep Learning in order to help them ease their work and make it more efficient.” +- [Shashaank KP](https://www.linkedin.com/feed/update/urn:li:activity:6455477724021645312/): “There were many startups / companies which had come up with great ideas.” +- [Chandan Kumar V T](https://www.linkedin.com/feed/update/urn:li:activity:6455480040804188160/): “We got to participate in the AI4Good hackathon where the challenge was to detect tumour cells or locations in the lung based on the image of the MRI scan.” +- [Denzel George](https://www.linkedin.com/feed/update/urn:li:activity:6455425371075645440/): “It presented an opportunity for me and my colleagues to learn more about the leading development in the field of Deep Learning, AI and many other fields.” +- [Nikhil Riyaz](https://www.linkedin.com/feed/update/urn:li:activity:6456183002593525761/): “We used TensorFlow to train a segmentation model based on documentation available on GitHub and Tensorflow.org and achieved a commendable accuracy.” +- [SHUBHA A](https://www.linkedin.com/feed/update/urn:li:activity:6455881864828764161/): “I was surprised to see how data plays a crucial role in training a machine and the importance of the data collection. 
The Summit made me believe that AI is the booming future technology.” +- [Bhavana Savanth](https://www.linkedin.com/feed/update/urn:li:activity:6456556435831611392/): “The Summit had a surge of ideas that catalyzed our understanding of these domains and also included the presentation on the world’s most beautiful super computer, MareNostrum 4 in Spain.” + +Congratulations to all students who attended OpenPOWER Summit Europe and participated in the AI4Good Hackathon! diff --git a/content/blog/supermicro-ibm-extend-strategic-relationship-deliver-choice-flexibility-next-generation-cloud-datacenter.md b/content/blog/supermicro-ibm-extend-strategic-relationship-deliver-choice-flexibility-next-generation-cloud-datacenter.md new file mode 100644 index 0000000..97a426c --- /dev/null +++ b/content/blog/supermicro-ibm-extend-strategic-relationship-deliver-choice-flexibility-next-generation-cloud-datacenter.md @@ -0,0 +1,27 @@ +--- +title: "Supermicro and IBM Extend Strategic Relationship to Deliver Choice and Flexibility for the Next Generation Cloud Datacenter" +date: "2016-11-15" +categories: + - "press-releases" + - "blogs" +--- + +SALT LAKE CITY, UT, SC16, November 15, 2016 ... Today Supermicro (NASDAQ: SMCI) and IBM (NYSE: IBM) have extended their multi-faceted long-term technology relationship. Supermicro has joined the OpenPOWER Foundation, an open development community that leverages the IBM POWER architecture. + +Supermicro is joining IBM, Google, NVIDIA, Canonical, Rackspace and more than 270 other leading technologists working collaboratively on advanced server, storage, and networking acceleration technology as well as industry leading open source software aimed at delivering more choice, control, and flexibility to developers of next-generation heterogeneous cloud datacenters. The group makes POWER-based hardware and software available to open development, along with making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform. + +Supermicro is working with IBM on a variety of fronts including IBM Cloud, IBM Storage and now IBM POWER.  Supermicro will leverage its strength in workload optimized system designs to deliver best in class IBM Power Systems for IBM and the OpenPOWER ecosystem. + +“Supermicro is extending its strong IBM relationship by working with IBM on OpenPOWER systems that deliver maximum performance and efficiency” said Wally Liaw, co-founder and senior vice president of international sales at Supermicro. “OpenPOWER customers can benefit from Supermicro’s industry leading workload optimized system design in combination with IBM’s latest POWER processor technology to reach exceptional performance levels.” + +"Supermicro is a strategic IBM collaborator and their membership in the OpenPOWER Foundation will further extend our strong relationship," said Calista Redmond, OpenPOWER Foundation President and IBM Systems Vice President. "Their expertise in server design, development and manufacturing will foster collaboration among members and further support widespread industry innovation." + +**About OpenPOWER** To learn more about OpenPOWER and to view the complete list of current members, go to http://www.openpowerfoundation.org. Use #OpenPOWER to join the conversation. + +**About Super Micro Computer Inc. 
(NASDAQ: SMCI)** Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions® for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green®” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market. + +Supermicro, Building Block Solutions and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. All other brands, names and trademarks are the property of their respective owners. + +**Media Contacts:** Kristin Bryson IBM Media Relations [kabryson@us.ibm.com](mailto:kabryson@us.ibm.com) + +Michael Kalodrich Super Micro Computer Inc. [michaelk@supermicro.com](mailto:michaelk@supermicro.com) diff --git a/content/blog/supervessel-come-create-the-era-of-heterogeneous-computing-with-fpgas-for-the-cloud.md b/content/blog/supervessel-come-create-the-era-of-heterogeneous-computing-with-fpgas-for-the-cloud.md new file mode 100644 index 0000000..2d84611 --- /dev/null +++ b/content/blog/supervessel-come-create-the-era-of-heterogeneous-computing-with-fpgas-for-the-cloud.md @@ -0,0 +1,28 @@ +--- +title: "SuperVessel: Come Create the Era of Heterogeneous Computing with FPGAs for the Cloud" +date: "2015-06-18" +categories: + - "blogs" +tags: + - "featured" +--- + +![Michael_Leventhal](images/Michael_Leventhal-e1434650882306-300x300.jpg)By: Michael Leventhal, Technical Manager, Data Center Acceleration, Xilinx + +Xilinx believes that we, in collaboration with the OpenPOWER Foundation, are spearheading a new era of computing, one that is capable of expanding human potential. In a word, it is the cloud -- which delivers compute capacity and data intelligence to the fingertips of billions. Big data and raw compute capacity have enabled the creation of computing solutions that are categorically different than anything ever done. + +These developments have paved the way for systems that can understand speech, translate languages, recognize individuals, interpret actions in video streams and even autonomously drive cars. However, based on the 75-year old Von Neumann architecture, compute infrastructure alone cannot handle this work within reasonable limits of cost, space, power, and system complexity. In a very short time, this venerable architecture will be unable to meet the taxing demands of the cloud. + +Soon, heterogeneous computing will be the new paradigm equipped to handle these complex workloads. Heterogeneous computing combines CPUs and compute engines with innovative architectures, which will be considerably more efficient for new era cloud workloads. Now, thanks to the IBM POWER Coherent Accelerator Processor Interface (CAPI), Xilinx FPGAs are dynamically hardware-configured to efficiently run cloud applications with an IBM POWER processor and share coherent access to host memory between the processor and the FPGA. + +The Von Neumann architecture has been refined over decades and the compute applications that run on it are highly optimized to run efficiently. Heterogeneous computing has been developing rapidly over the last decade, but there is still a great amount of research and development needed before reaching its full potential. This is one of the most critical areas of computing research today. 
+ +Xilinx is committed to supporting this research and development to help redefine the future of heterogeneous computing. That’s why we joined forces with IBM to design POWER processors with FPGAs attached and enable researchers, students, and developers in the community to leverage OpenPOWER and Xilinx development tools through SuperVessel. This open access cloud service, which was created by IBM Research Beijing and IBM Systems Labs, is now provisioned with CAPI-compatible FPGAs, providing a complete virtual R&D engine for the creation and testing of cloud applications in areas such as deep analytics, machine learning and IoT. + +To further educate developers, Xilinx collaborated with several universities to organize the first international workshop on High Performance Heterogeneous Reconfigurable Computing (H²RC) at SC15 ([http://h2rc.cse.sc.edu/](http://h2rc.cse.sc.edu/)). This will mark the first time an FPGA-focused workshop aimed at the heterogeneous computing community will be held at the supercomputing conference. We'd like to invite you to take advantage of the resources available through SuperVessel and share your experience with the community at H²RC. + +See you in the Cloud and see you at SC15! + +_About Michael Leventhal,_ _Technical Manager, Data Center Acceleration, Xilinx_ + +_Michael is responsible for leadership in the compute acceleration sector of Xilinx’s data center business unit. He has more than a decade of experience in co-processing engines and acceleration with reconfigurable logic, software, and design tools in a wide range of application domains as an inventor, technologist, product manager, and marketer.  He holds a BS-EECS degree from U.C. Berkeley._ diff --git a/content/blog/supervessel-openpower-rd-cloud-with-operation-and-practice-experience-sharing.md b/content/blog/supervessel-openpower-rd-cloud-with-operation-and-practice-experience-sharing.md new file mode 100644 index 0000000..aecaed4 --- /dev/null +++ b/content/blog/supervessel-openpower-rd-cloud-with-operation-and-practice-experience-sharing.md @@ -0,0 +1,32 @@ +--- +title: "SuperVessel -- OpenPOWER R&D cloud with operation and practice experience sharing" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Abstract + +SuperVessel cloud (www.ptopenlab.com) is the cloud platform built on top of POWER/OpenPOWER architecture technologies. It aims to provide the open remote access for all the ecosystem developers and university students. We (IBM Research China, IBM System Technology Lab in China and partners) have built and launched this cloud for more than 3 months, and rapidly attracted the public users from more than 30 universities, including those from GCG and the United States. + +The cloud was built on OpenStack and enabled + +- The latest infrastructure as services, including PowerKVM, containers and docker services with big endian and little endian options. +- The big data service through the collaboration with IBM big data technology for Hadoop 1.0 and open source technology for Hadoop 2.0 (SPARK service) +- The IoT (Internet-of Things) application platform service which has successfully incubated several projects in Healthcare, smart city etc. areas. +- The Accelerator as service (FPGA virtualization) with the novel marketplace, through the collaboration with Altera. + +In this presentation, we would like to share how we built the cloud IaaS and PaaS with the open technologies on OpenPOWER. We also would share what will be the difference when you built a cloud for POWER vs. x86. 
The most important is the operational experience sharing (with data) for the cloud services on POWER/OpenPOWER. + +### Objective for the presentation + +1. With our real story on SuperVessel cloud, we want **to give industry the real and strong confidence** that OpenPOWER could be easily used for cloud, mobile and analysis. +2. With our real experience, we want **to tell industry how to build** the cloud and big data services with OpenPOWER +3. To encourage industry ecosystem to also easily build their cloud to attract more and more developers for OpenPOWER (it will be very important for OpenPOWER’s success) +4. To encourage our partners and developers, they could leverage SuperVessel to speed up their R&D work on OpenPOWER. **SuperVessel is open for them for use and collaboration.** + +### Speaker Bio + +**Speaker Name:** Yonghua Lin ([linyh@cn.ibm.com](mailto:linyh@cn.ibm.com)), IBM Research China Yonghua Lin is the Senior Technical Staff Member and Senior Manager of Cloud Infrastructure group in IBM Research. She has worked on system architecture research in IBM for 12 years. Her work covered all kinds of IBM multicore processors in the past 10 years, including IBM network processor, IBM Cell processor, PRISM, IBM POWER 6, and POWER 7, etc. She was the initiator of mobile infrastructure on cloud from 2007 which has become the Network Function Virtualization today. She led IBM team built up the FIRST optimized cloud for 4G mobile infrastructures, and successfully demonstrated in ITU, Mobile World Congress, etc. She was the founder of SuperVessel cloud to support OpenPOWER research and development in industry. She herself has more than 40 patents granted worldwide and publications in top conferences and journals. + +[Back to Summit Details](2015-summit/) diff --git a/content/blog/suzhou-powercore-technology-co-intends-to-use-ibm-power-technology-for-chip-design-that-pushes-innovation-in-china.md b/content/blog/suzhou-powercore-technology-co-intends-to-use-ibm-power-technology-for-chip-design-that-pushes-innovation-in-china.md new file mode 100644 index 0000000..e9dac64 --- /dev/null +++ b/content/blog/suzhou-powercore-technology-co-intends-to-use-ibm-power-technology-for-chip-design-that-pushes-innovation-in-china.md @@ -0,0 +1,9 @@ +--- +title: "Suzhou PowerCore Technology Co. Intends To Use IBM POWER Technology For Chip Design That Pushes Innovation In China" +date: "2014-01-19" +categories: + - "press-releases" + - "blogs" +--- + +ARMONK, N.Y. and JIANGSU, China, Jan. 19, 2014 /PRNewswire/ -- IBM \[NYSE: IBM\], the Suzhou PowerCore Technology Company and the Research Institute of Jiangsu Industrial Technology today announced the two Chinese organizations will join the OpenPOWER Foundation, with Suzhou PowerCore intending to use IBM's POWER architecture to provide customized chip design solutions to push server innovation in such areas as Big Data, cloud computing and next generation data centers. 
diff --git a/content/blog/system-management-tool-for-openpower.md b/content/blog/system-management-tool-for-openpower.md new file mode 100644 index 0000000..58ec715 --- /dev/null +++ b/content/blog/system-management-tool-for-openpower.md @@ -0,0 +1,46 @@ +--- +title: "System Management Tool for OpenPOWER" +date: "2015-01-19" +categories: + - "blogs" +--- +

### Introduction to Authors

Song Yu: Male, IBM STG China, Development Manager
Li Guang Cheng: Male, IBM STG China, xCAT Senior Architect
Mao Qiu Yin: Male, Teamsun, Director
Hu Hai Chen: Male, Teamsun, Development Manager
Ma Yuan Liang: Male, Teamsun, System Department Manager
Chen Qing Hong: Male, Teamsun, Architect

### Background

OpenPOWER is a new-generation platform. As a new system, infrastructure-level management is the most important requirement as OpenPOWER machines come into wide use in both cloud and non-cloud environments.

### In cloud area

The end user normally cares about SaaS or PaaS, but the cloud admin must consider how to manage the OpenPOWER physical nodes that provide the service. Quickly and automatically provisioning physical machines and adding physical nodes into the cloud to provide service are basic and very important requirements for a cloud center.

At the same time, if the cloud provider supports HPC-related services, it needs to consider providing physical compute resources to end users rather than virtual resources. How to offer self-service for physical nodes is a new challenge in the public cloud.

### In non-cloud area

A light-weight system management tool for OpenPOWER is also required. How to control the HW and how to integrate smoothly with existing Power or x86 clusters are the major challenges for the OpenPOWER system management tool.

### Demonstrated Features

1. HW Control – Remote power, remote console, hardware inventory, hardware vitals, energy management and so on
2. Automatic Discovery – Automatically discover new OpenPOWER HW and add it into the management system
3. Provisioning – Unattended OS and application deployment onto the OpenPOWER node
4. Image Management – Clone images, generate images including applications from scratch
5. KVM management – Provision the KVM hypervisor and manage the VM lifecycle
6. Docker management – Provision Docker on OpenPOWER nodes and manage the container lifecycle
7. Multitenancy – Support user, role, tenant and policy management; work with Keystone for authentication management and integrate with OpenStack

### Our experience

We will leverage xCAT as the backend and Horizon as the frontend of the OpenPOWER management tool. xCAT already supports OpenPOWER node management and enables Docker on OpenPOWER systems.

Benefit: The OpenPOWER management tool is based on open-source products. It can easily manage OpenPOWER nodes, and OpenPOWER vendors can easily add their own HW and FW control functions into the tool as value-add. The whole solution also demonstrates a complete story of how we enable OpenPOWER nodes in a private or public cloud. 
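As a rough illustration of how the HW-control features listed above can be driven from scripts, the Python sketch below shells out to two standard xCAT client commands, `nodels` (list the nodes in a noderange) and `rpower` (remote power control). This is a sketch under assumptions: it presumes an xCAT management node with those commands installed, and the `openpower` node group name is hypothetical.

```python
# Sketch only: assumes an xCAT management node with the standard client
# commands (nodels, rpower) on the PATH; the "openpower" node group is hypothetical.
import subprocess

def xcat(*args):
    """Run an xCAT client command and return its stdout as a list of lines."""
    result = subprocess.run(args, check=True, capture_output=True, text=True)
    return result.stdout.splitlines()

# List the nodes xCAT knows about in the (hypothetical) "openpower" group.
nodes = xcat("nodels", "openpower")

# Query the power state of each node, then power on any node that is off.
for node in nodes:
    state = xcat("rpower", node, "stat")[0]   # e.g. "node01: off"
    print(state)
    if state.endswith("off"):
        xcat("rpower", node, "on")
```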
+ 

### Presentation

 [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Cheng-Li-Guang_OPFS2015_IBM-NCO_031415_final.pdf)

[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/targeting-gpus-using-openmp-directives.md b/content/blog/targeting-gpus-using-openmp-directives.md new file mode 100644 index 0000000..4ff27b9 --- /dev/null +++ b/content/blog/targeting-gpus-using-openmp-directives.md @@ -0,0 +1,34 @@ +--- +title: "Targeting GPUs using OpenMP Directives on Summit with GenASiS" +date: "2018-12-18" +categories: + - "blogs" +tags: + - "featured" +--- +

By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM

In the lead-up to SC18 we held the [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/). It was a perfect opportunity for members of academia working in supercomputing to share recent successes they have had developing on OpenPOWER platforms.

One such session was led by [Reuben Budiardja](https://www.olcf.ornl.gov/directory/staff-member/reuben-budiardja/), a computational scientist in the National Center for Computational Sciences at Oak Ridge National Laboratory. He is the lead developer of GenASiS, the General Astrophysics Simulation System, which has been used to study the role of fluid instabilities in supernova dynamics. GenASiS is written entirely in modern Fortran and, until recently, was CPU-only code.

Budiardja and his colleague [Christian Cardall](https://www.ornl.gov/staff-profile/christian-y-cardall) identified three potential paths that could be explored to transition to GPUs:

- CUDA – would require a rewrite of all computational kernels, a loss of Fortran semantics and interfacing with the rest of the Fortran code.
- CUDA Fortran – would be a non-standard extension to Fortran and would not easily fall back to standard Fortran.
- Directives (OpenMP) – would allow retention of Fortran semantics, and OpenMP 4.5 has excellent support for modern Fortran.

Using OpenMP Directives on [Summit](https://www.olcf.ornl.gov/summit/), [the most powerful supercomputer in the world](https://www.top500.org/news/us-regains-top500-crown-with-summit-supercomputer-sierra-grabs-number-three-spot/), produced strong results. In testing the 3D scaling of the [RiemannProblem](https://en.wikipedia.org/wiki/Riemann_problem), **the team realized a speed-up of 3.92X – 6.71X going from 7 CPU threads to the GPU**.

Pinned Memory was then used to take these results even further. While there is not yet a mechanism by which to use Pinned Memory in OpenMP, the team added a Fortran wrapper in GenASiS to optimize data transfers. Doing so provided an **additional speed-up of 1.7X – 2.0X**, for an **overall speed-up of over 9X from 7 CPU threads**.

Budiardja concluded that OpenMP allows simple and effective porting of Fortran code to target GPUs, and this work has many implications. It will enable the team to perform higher-fidelity simulations and ensemble studies for trends in observables. In fact, the team is planning to perform ~200 2D grey transport supernova simulations, tens of 3D grey transport simulations, and a handful of 3D spectral transport simulations. Moreover, this is the first step towards full [Boltzmann radiation transport](https://en.wikipedia.org/wiki/Boltzmann_equation) with exascale computing.

View Mr. 
Budiardja's full session video and slides below.

**[Targeting GPUs using OpenMP Directives on Summit with GenASiS: A Simple and Effective Fortran Experience](//www.slideshare.net/ganesannarayanasamy/targeting-gpus-using-openmp-directives-on-summit-with-genasis-a-simple-and-effective-fortran-experience "Targeting GPUs using OpenMP Directives on Summit with GenASiS: A Simple and Effective Fortran Experience")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)** diff --git a/content/blog/tau-performance-system-openpower-summit.md b/content/blog/tau-performance-system-openpower-summit.md new file mode 100644 index 0000000..1248f75 --- /dev/null +++ b/content/blog/tau-performance-system-openpower-summit.md @@ -0,0 +1,20 @@ +--- +title: "TAU Performance System Showcased at OpenPOWER Summit Europe 2018" +date: "2018-10-24" +categories: + - "blogs" +tags: + - "featured" +--- +

By: Sameer Shende, director, Performance Research Laboratory, University of Oregon

At the University of Oregon, we’ve been looking at the problem of performance engineering of complex PowerAI applications. The [TAU Performance System](http://tau.uoregon.edu)® is a performance profiling and tracing toolkit that we developed, and it has been successfully applied to evaluate the performance of PowerAI components.

I recently had the chance to share early results in my session “TAU for Accelerating AI Applications” at the [OpenPOWER Summit Europe 2018](https://openpowerfoundation.org/summit-2018-10-eu/).

**[TAU for Accelerating AI Applications at OpenPOWER Summit Europe](//www.slideshare.net/OpenPOWERorg/tau-for-accelerating-ai-applications-at-openpower-summit-europe "TAU for Accelerating AI Applications at OpenPOWER Summit Europe ")** from **[OpenPOWERorg](https://www.slideshare.net/OpenPOWERorg)**

At OpenPOWER Summit Europe, researchers presented state-of-the-art approaches to running AI workloads on OpenPOWER systems. PowerAI is a powerful software stack, and running larger datasets on it with TAU will enhance our understanding of the complex inner workings of the interplay between Power9 CPUs and NVIDIA GPUs. diff --git a/content/blog/teuto-net-uses-ubuntu-to-bring-openpower-based-systems-to-the-public-cloud.md b/content/blog/teuto-net-uses-ubuntu-to-bring-openpower-based-systems-to-the-public-cloud.md new file mode 100644 index 0000000..5060dd5 --- /dev/null +++ b/content/blog/teuto-net-uses-ubuntu-to-bring-openpower-based-systems-to-the-public-cloud.md @@ -0,0 +1,28 @@ +--- +title: "Teuto.net Uses Ubuntu to Bring OpenPOWER-based Systems to the Public Cloud" +date: "2015-07-22" +categories: + - "blogs" +tags: + - "featured" +--- +

By Randall Ross, Ubuntu

Recently, the German IT company [teuto.net](https://insights.ubuntu.com/2015/06/09/teuto-net-uses-ubuntu-to-bring-ibm-power8-to-the-public-cloud/), which specializes in providing hosting, cloud and web development services based on open source technologies, announced it is adding more power (excuse the pun) to its OpenStack public cloud service, teutoStack Public Cloud, which had previously been built exclusively on proprietary hardware. As a long-term Ubuntu Cloud Partner and Ubuntu Advantage Reseller, teuto.net was delighted when Canonical expanded their platform to support OpenPOWER-based POWER8 systems.

By working with OpenPOWER-based technology, fueled by collaborative innovation, teutoStack Public Cloud can deliver on growing expectations in the highly competitive cloud market. 
It now brings new capabilities within the reach of more companies as the OpenPOWER price/performance advantage lowers the barrier for compute-intensive workloads, such as analytics.

The combination of Ubuntu, Juju, and MAAS as key components in this new OpenPOWER-based public cloud offering is exciting, as it provides teuto.net customers with real choice. They can now enjoy much higher levels of performance for analytics and other resource-hungry workloads. They can also experience the benefits of higher node density, which translates to an excellent return on infrastructure spend: a smaller server footprint, lower energy costs and a more environmentally friendly business. Best of all, they can do this without changing how they work with the cloud. OpenPOWER-based technology may be under the hood, but OpenStack is still the interface, and Juju is still the service modeler.

The combination of Ubuntu with the OpenPOWER platform has also provided impressive reliability to teuto.net. The company can now easily model, provision, build, manage and support its cloud at scale. It has created the ideal platform to support its new range of cloud services, optimized to support capabilities such as analytics, where they are seeing a significant boost in memory performance.

Based on the positive response from clients, teuto.net is planning to integrate more OpenPOWER-based POWER8 servers into the teutoStack Public Cloud and eventually migrate additional OpenStack core services to POWER8 for higher performance. Customers like GRAU DATA AG, a data storage company, are already using the teutoStack Public Cloud for testing and delivering their own applications on the OpenPOWER platform with higher performance. It is refreshing to see more and more OpenPOWER solutions coming to market every day, and all the hard work of the OpenPOWER Foundation members, including Canonical, paying off for companies like teuto.net and their customers. Ubuntu has always been focused on giving people choice and access to the best technology. Now, with OpenPOWER, we have a new and exciting way to do that.

* * *

 

_![randall.002](images/randall.002-150x150.png)About Randall Ross_

_Randall Ross is an Ubuntu Community Manager with Canonical. He is passionate about all things POWER and works to help grow the community that wants to make Ubuntu and OpenPOWER-based solutions that have big impact. Randall leads the OpenPOWER Foundation's Integrated Solutions workgroup. Prior to joining Canonical, Randall enjoyed over 20 years working in various IT management and consulting roles to ensure that technology solutions match business needs. He has also built and manages one of the largest Ubuntu face-to-face communities in his home city of Vancouver, Canada._ diff --git a/content/blog/the-disruptive-technology-of-openpower.md b/content/blog/the-disruptive-technology-of-openpower.md new file mode 100644 index 0000000..7a3393d --- /dev/null +++ b/content/blog/the-disruptive-technology-of-openpower.md @@ -0,0 +1,14 @@ +--- +title: "The Disruptive Technology of OpenPOWER" +date: "2015-01-16" +categories: + - "blogs" +--- +

The OpenPOWER Foundation is certainly carrying some strong momentum as it enters its second year. As we look forward, there are many things still to be done to take the next step on our journey towards creating a broadly adopted, innovative and open platform for our industry. I will share my Top Ten List of OpenPOWER Projects to Disrupt the Data Center. 
Anything and everything is fair game on this list across all disciplines, technologies and markets. Come join us in a fun look at how the OpenPOWER Foundation will continue to shake up the Data Center. + +### Speaker + +[Dr. Bradley McCredie](https://www.linkedin.com/profile/view?id=16651393&authType=NAME_SEARCH&authToken=h87g&locale=en_US&srchid=32272301421437407216&srchindex=1&srchtotal=1&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421437407216%2CVSRPtargetId%3A16651393%2CVSRPcmpt%3Aprimary) is an IBM Fellow, Vice President of IBM Power Systems Development and President of the OpenPOWER Foundation. Brad first joined IBM focusing on packaging for IBM’s mainframe systems. He later took a position within the IBM Power Systems development organization and has since worked in a variety of development and executive roles for POWER-based systems. In his current role, he oversees the development and delivery of IBM Power Systems that incorporate the latest technology advancements to support clients' changing business needs. + +[Back to Summit Details](2015-summit/) diff --git a/content/blog/the-future-of-interconnect-with-openpower.md b/content/blog/the-future-of-interconnect-with-openpower.md new file mode 100644 index 0000000..7899816 --- /dev/null +++ b/content/blog/the-future-of-interconnect-with-openpower.md @@ -0,0 +1,26 @@ +--- +title: "The Future of Interconnect with OpenPOWER" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Abstract + +Mellanox Technologies is a founding member of the OpenPOWER Foundation, and its interconnect solutions are the foundation for scalable, performance-demanding computing infrastructures. Delivering 100Gb/s throughput, sub-700ns application-to-application latency and message rates of 150 million messages per second, Mellanox is recognized as the world-leading interconnect solution provider. Along with proven performance, scalability, application offloads and management capabilities, Mellanox EDR 100G solutions were selected by the DOE for CORAL (Collaboration of Oak Ridge, Argonne and Lawrence Livermore National Labs), a project launched to meet the US Department of Energy’s (DOE) 2017-2018 leadership goals of competitiveness in science and ensure US economic and national security. + +Mellanox ConnectX-4 EDR 100Gb/s technology was introduced in November at the SC'14 conference in New Orleans, LA. ConnectX-4 EDR 100Gb/s with CAPI support tightly integrates with the POWER CPU at the local bus level and provides faster access between the POWER CPU and the network device. We will discuss the latest interconnect advancements that maximize application performance and scalability on OpenPOWER architecture, including enhanced flexible connectivity with the latest Mellanox ConnectX-3 Pro Programmable Network Adapter. The new programmable adapter provides maximum flexibility for users to bring their own customized applications such as IPSEC encryption, enhanced flow steering and Network Address Translation (NAT), data inspection, data compression and others. + +### Speaker Bio + +**Speaker:** [Scot Schultz](https://www.linkedin.com/profile/view?id=6563260&authType=NAME_SEARCH&authToken=3hwb&locale=en_US&srchid=32272301421438181309&srchindex=1&srchtotal=6&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438181309%2CVSRPtargetId%3A6563260%2CVSRPcmpt%3Aprimary) **Title:** Director, HPC / Technical Computing + +Scot Schultz is an HPC technology specialist with broad knowledge in operating systems, high-speed interconnects and processor technologies. 
Joining Mellanox in early 2013 as Director of HPC and Technical Computing, Schultz is a 25-year veteran of the computing industry who, prior to joining Mellanox, spent 17 years at AMD in various engineering and leadership roles, including strategic HPC technology ecosystem enablement. Scot has been instrumental in the growth and development of numerous industry standards-based organizations including OpenPOWER, OpenFabrics Alliance, HPC Advisory Council and many others. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Schultz_OPFS2015_Mellanox_030815_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/the-next-peak-in-hpc.md b/content/blog/the-next-peak-in-hpc.md new file mode 100644 index 0000000..847cd1d --- /dev/null +++ b/content/blog/the-next-peak-in-hpc.md @@ -0,0 +1,36 @@ +--- +title: "The Next Peak in HPC" +date: "2015-01-22" +categories: + - "blogs" +--- + +National Center for Computational Sciences, Oak Ridge National Laboratory, Oak Ridge, TN, USA + +### Abstract + +Hybrid CPU+GPU architectures are a response to power limitations imposed by the end of processor clock-rate scaling in the last decade. This limitation continues to drive supercomputer architecture designs toward massively parallel, hierarchical, and/or hybrid systems, and we expect that, for the foreseeable future, large leadership computing systems will continue this trajectory in order to address science and engineering challenges for government, academia, and industry. Consistent with this trend, the U.S. Department of Energy’s (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to the Oak Ridge National Laboratory (ORNL) in 2017. This new supercomputer, named Summit, will provide at least five times the performance of Titan on science applications, Titan being the OLCF’s current hybrid CPU+GPU leadership system, and become the next peak in leadership-class computing systems for open science. To deliver this new capability, IBM has formed a partnership with NVIDIA and Mellanox, all members of the OpenPOWER Foundation, and each will provide system components for Summit. In addition, OLCF will partner with eight application software development teams to jointly prepare their science applications for the Summit architecture, and carry out an early science campaign to demonstrate Summit’s new capabilities for science. These application-readiness partnerships, with support from the IBM/NVIDIA Center of Excellence at Oak Ridge, will exercise Summit’s programming models and harden its software tools. In order to meet DOE’s broad science and energy missions, DOE procurements continue to support diversity in architectures. And in this context, more mature programming environments, enabling performance-portable software engineering, become a requirement for DOE supercomputing facilities. To prepare mission-critical scientific applications now and for next-generation systems, our center continues to advance open standards and work closely with ecosystem partners to address the needs of our users. These efforts will be outlined in this talk. + +### Presenters + +Tjerk Straatsma, Jim Rogers, Adam Simpson, Ashley Barker, Fernanda Foertter, Jack Wells + +### Speaker Bio + +**Jack C. 
Wells, Ph.D.**, Director of Science, National Center for Computational Sciences, Oak Ridge National Laboratory + +Jack Wells is the Director of Science for the National Center for Computational Sciences (NCCS) at Oak Ridge National Laboratory (ORNL). He is responsible for devising the strategy to ensure cost-effective, state-of-the-art scientific computing at the NCCS, which hosts the Department of Energy’s Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science national user facility, and Titan, currently the fastest supercomputer in the United States. + +In ORNL’s Computing and Computational Sciences Directorate, Wells has previously led both the Computational Materials Sciences group in the Computer Science and Mathematics Division and the Nanomaterials Theory Institute in the Center for Nanophase Materials Sciences. During an off-site assignment from 2006 to 2008, he served as a legislative fellow for U.S. Senator Lamar Alexander of Tennessee, providing information about _high-performance computing, energy technology, and science, technology, engineering, and mathematics education policy issues_. + +Wells began his ORNL career in 1990, conducting resident research for his Ph.D. in Physics from Vanderbilt University.  Following a three-year postdoctoral fellowship at the Harvard-Smithsonian Center for Astrophysics, he returned to ORNL in 1997 as a staff scientist and Wigner fellow.  Jack is an accomplished practitioner of computational physics and has been sponsored in his research by the Department of Energy’s Office of Basic Energy Sciences. + +Jack has authored or co-authored over 70 scientific papers and edited one book, spanning nanoscience, materials science and engineering, nuclear and atomic physics, computational science, and applied mathematics. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/WellsJack_OPFS2015_ORNL.031815.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/the-next-step-in-the-openpower-foundation-journey.md b/content/blog/the-next-step-in-the-openpower-foundation-journey.md new file mode 100644 index 0000000..29bb2d0 --- /dev/null +++ b/content/blog/the-next-step-in-the-openpower-foundation-journey.md @@ -0,0 +1,97 @@ +--- +title: "The Next Step in the OpenPOWER Foundation Journey" +date: "2019-08-20" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "google" + - "openpower-summit" + - "wistron" + - "openpower-foundation" + - "yadro" + - "raptor" + - "linux-foundation" + - "power-isa" + - "inspur-power-systems" + - "smartdv" + - "tvs" + - "molex" + - "suse" + - "rambus" +--- + +[Hugh Blemings](https://www.linkedin.com/in/hugh-blemings/?originalSubdomain=au), Executive Director, OpenPOWER Foundation + +Today marks one of the most important days in the life of the OpenPOWER Foundation. With IBM announcing new contributions to the open source community, including the POWER Instruction Set Architecture (ISA) and key hardware reference designs, at [OpenPOWER Summit North America 2019](https://events.linuxfoundation.org/events/openpower-summit-north-america-2019/), the future has never looked brighter for the POWER architecture. + +## **OpenPOWER Foundation Aligns with Linux Foundation** + +The OpenPOWER Foundation will now join projects and organizations like OpenBMC, CHIPS Alliance, OpenHPC and so many others within the [Linux Foundation](https://www.linuxfoundation.org/). 
The Linux Foundation is the premier open source group, and we’re excited to be working more closely with them. + +Since our founding in 2013, IEEE-ISTO has been our home, and we owe so much to its team. It’s as a result of IEEE-ISTO’s support and guidance that we’ve been able to expand to more than 350 members and that we’re ready to take the next step in our evolution. On behalf of our membership, our board of directors and myself, we place on record our thanks to the IEEE-ISTO team. + +By moving the POWER ISA under an open model - guided by the OpenPOWER Foundation within the Linux Foundation - and making it available to the growing open technical commons, we’ll enable innovation in the open hardware and software space to grow at an accelerated pace. The possibilities for what organizations and individuals will be able to develop on POWER through its mature ISA and software ecosystem will be nearly limitless. + +https://www.youtube.com/watch?v=v95CNTCCim0 + +  + +## **The Impact of an Open POWER ISA and Open Source Designs** + +We’ve heard in the past that developing a full featured core like POWER can be complicated – but that’s not necessarily the case. In fact, we believe the open source community can leverage the contributions made by IBM rather quickly. + +In addition to open sourcing the POWER ISA, IBM is also contributing a newly developed softcore to the community. In a very short time, an IBM engineer was able to develop a softcore on the POWER ISA, and get it up and running on a Xilinx FPGA. This softcore implementation is being demonstrated this week at OpenPOWER Summit North America. + +“This is the first tangible outcome of the opening of the POWER ISA,” said Mendy Furmanek, President, OpenPOWER Foundation and Director, OpenPOWER Processor Enablement, IBM. “It’s an example of the type of innovation that can be brought forward by the community as a result of newly open-sourced contributions.” + +IBM is also contributing reference designs for OpenCAPI and Open Memory Interface (OMI) to the open source community. OpenPOWER Foundation and OpenCAPI Consortium member Microchip Technology was recently awarded [Best Of Show at Flash Memory Summit 2019](https://www.businesswire.com/news/home/20190807005938/en/Flash-Memory-Summit-Announces-2019-Show-Award) for its newly announced serial memory controller, which leverages interface designs IBM is contributing. + +While these designs are architecture-agnostic, they will help to grow and advance the OpenPOWER ecosystem. OpenPOWER Foundation partners including IBM and Google share their perspectives on the news in [Microchip's announcement of the SMC 1000 8x25G.](https://investor.microsemi.com/2019-08-05-Microchip-Enters-Memory-Infrastructure-Market-with-Serial-Memory-Controller-for-High-performance-Data-Center-Computing) + +## **Excitement From the OpenPOWER Ecosystem** + +We’ve already heard incredibly positive feedback on today's announcements from a number of our partners: + +- As the Chairman of the OpenPOWER Foundation Board of Directors, it’s an honor to share in such a tremendous moment with our community. The opening of the POWER ISA and alignment of the OpenPOWER Foundation with the Linux Foundation is a reflection of our mature, sustainable and growing ecosystem. These changes will result in more consortia-driven initiatives and allow more diverse, innovative products and solutions to be brought to market. 
– Artem Ikoev, Chairman of the OpenPOWER Foundation Board of Directors, Co-founder and CTO, [Yadro](https://yadro.com/) + +  + +- "Inspur Power Systems has a rich portfolio based on both Power and OpenPOWER that is realizing growth in the China market. Recently, Inspur Power Systems has developed and announced industry-leading OpenPOWER products in storage, datacenter, AI and big data. We receive positive feedback from our customers citing TCO and performance advantages as well as value in the openness of OpenPOWER technology and software stack. Inspur Power Systems is very excited about the possibilities today’s announcements offer to the OpenPOWER ecosystem, our company and of course our clients. We congratulate IBM and the OpenPOWER Foundation for showing leadership in taking these steps." - John Hu, General Manager, [Inspur Power Systems](https://www.inspursystems.com/) + +  + +- "At Raptor Computing Systems our top priority has always been owner controlled, auditable systems. The release of the POWER ISA is key to making POWER the definitive go-to architecture not only for security-sensitive applications, but for any application that is intended to last. With this single, vital step, Raptor Computing Systems can now offer truly high performance systems with absolutely no compromises on user freedom. Make no mistake, this is a milestone for the industry -- computing as it should have been, and can be again, thanks to IBM's willingness to embrace open systems and Raptor Computing Systems' commitment to owner control.” - Timothy Pearson, CTO, [Raptor Computing Systems](https://www.raptorcs.com/) + +  + +- “At the University of Oregon, we are committed to supporting the OpenPOWER platform and developing tools that help improve the quality and efficiency of the software developed on this platform. The tools developed at the University of Oregon include the TAU Performance System(R), in use at supercomputing sites around the world, for evaluating the performance of HPC and AI workloads. The release of the POWER ISA is an important milestone in developing the software ecosystem on the OpenPOWER platform.” Sameer Shende, Director, Performance Research Laboratory, [University of Oregon](https://www.uoregon.edu/) + +  + +- “Wistron Enterprise Business Group has enjoyed a long and productive relationship with the OpenPOWER Ecosystem and was one of the first members of the OpenPOWER Foundation when we joined in 2014. We understand the many benefits of Open—solutions built on our POWER9  "MiHawk" server has a great combination of high performance and ability to run an entirely open software stack, from firmware to applications. By leveraging POWER9s impressive memory capability, PCIe Gen4, and OpenCAPI technologies this server excels in AI, cloud, and BigData. We're excited to see IBM and the OpenPOWER Foundation take the ecosystem to the next level of openness with the announcements today and are already considering how we can best leverage this for the benefit of our customers. ” - William Lin, President, Enterprise Business Group, [Wistron](https://www.wistron.com/CMS/ChangeLanguage/3) + +  + +- "Rambus joined the OpenPOWER Foundation in November of 2016 and has been actively developing a research platform for [hybrid memory systems](https://www.rambus.com/rambus-to-develop-hybrid-memory-system-architectures/). As advocates for open hardware standards we're pleased to see the POWER ISA opened up, a positive step for the overall ecosystem and industry." 
- Gary Bronner, Senior Vice President of [Rambus Labs](https://www.rambus.com/) + +  + +- “SUSE has been part of the POWER/OpenPOWER story from the start, with SUSE Linux Enterprise Server being one of the first commercially supported distributions on the architecture. As a long-time participant in open technical communities, software and more recently hardware, it's great to see IBM and the OpenPOWER Foundation continuing their drive toward a truly open hardware and software stack that's enterprise-ready. We look forward to the next generation of systems resulting from these ongoing efforts.” - Alan Clark, Director of Industry Initiatives, Emerging Standards and Open Source, [SUSE](https://www.suse.com/index-b/) + +  + +- “We are delighted to see that OpenPOWER is continuing to forge ahead with opening up every aspect of its computing architecture. This is allowing true innovation from experts across the entire ecosystem toward a rapid product development cycle that our industry desperately needs as we shift to Heterogeneous Distributed Computing architectures. In particular Molex & BittWare are looking forward to potentially leveraging the new OMI, Open Memory Interface, IP and DDIMMs in our future FPGA accelerators.” - Allan Cantle, CTO, [Molex ISI Group](https://www.molex.com/molex/home) + +  + +- “SmartDV™ Technologies, the proven and trusted choice for Verification Intellectual Property (IP), is extremely excited to see OpenCAPI moving to an open source IP model. OpenCAPI is an important new development that enables data to move through the system more efficiently in the areas of accelerators, networking and storage, as well as general compute off-load. We at SmartDV believe that having an ecosystem where industry leaders can drive innovation through an open environment is critical for mass adoption and acceptance. And at SmartDV we offer the first commercially available OpenCAPI Bus Functional Model that supports OpenCAPI 3.0 as well as 3.1 to verify OpenCAPI initiatives that includes an extensive test suite that performs random or directed protocol tests to create a range of scenarios to effectively verify the design under test. SmartDV is also offering a synthesizable OpenCAPI transactor for emulation and/or FPGA prototyping as well as a System C version of the OpenCAPI Bus Functional Model.” - Barry Lazow, Vice President, Worldwide Sales and Marketing, [SmartDV Technologies](http://www.smart-dv.com/) + +  + +- "T&VS are a leading global provider of test and verification services that help companies deliver world-class hardware and software products that are reliable, safe and secure. To meet the challenges of today's complex systems, the industry needs to continue to evolve. We are excited about the new open hardware technologies announced today, and the opportunities it brings to us and to the overall ecosystem. As a services company built on a deep understanding of the latest methodologies, we are ready to support the emerging open hardware development landscape." - Dr. Mike Bartley, Founder and CEO of [T&VS](https://www.testandverification.com/) + +  + +Thank you to all of our OpenPOWER Foundation members and the open source community for your support at this significant juncture for our Foundation. I’d love to hear any feedback you have. Please leave a comment below, send me an email, or consider joining us at our upcoming [OpenPOWER Summit Europe](https://events.linuxfoundation.org/events/openpower-summit-eu-2019/)! 
diff --git a/content/blog/the-open-secret-behind-the-success-of-openpower.md b/content/blog/the-open-secret-behind-the-success-of-openpower.md new file mode 100644 index 0000000..ad8b757 --- /dev/null +++ b/content/blog/the-open-secret-behind-the-success-of-openpower.md @@ -0,0 +1,24 @@ +--- +title: "The Open Secret Behind the Success of OpenPOWER" +date: "2015-05-07" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Brad McCredie, President_ + +The release this week of Intel’s new 18-core Haswell-EX chip gives us an opportunity to gauge how OpenPOWER technology and our “co-opetition” business model are stacking up. It’s an open secret. In just a little over a year, with more than 10 new collaboratively built hardware solutions and growing, the OpenPOWER Foundation is reimagining the data center and leading our industry into a new era, dominated by hyperscale clouds and analytics on huge datasets. + +When we were founded in 2013, some in the industry were skeptical of our approach – even comparing us to OpenSPARC, a technology that is generally acknowledged to have underperformed. But where OpenSPARC gave away old technology and never really focused on building a strong ecosystem, OpenPOWER shares new technology, has an industry-led ecosystem of more than 100 members, and is built around the first system developed for the most modern workloads and deployment models. It should also be noted that while OpenSPARC and Intel’s proprietary product line were among the many options back when Moore’s Law appeared to be perpetually sustainable, OpenPOWER is emerging as Moore’s Law approaches its limit, and the industry is eager for alternative choices. As a study by a leading industry analyst put it last year, POWER8, the IBM architecture that is the basic building block of OpenPOWER, “offers a viable alternative to Intel’s market-leading products…and is energizing the OpenPOWER Foundation.” Those who make the point that the OpenPOWER approach has been tried before are right. But while OpenPOWER has dared to go where others have attempted to go before, it is the first model to get it right. In short, OpenPOWER is the wave of the future, and there’s no turning back. The industry is voting with its feet and its dollars. + +Power Systems lead the global Big Data and analytics market and are the top choice for scalable systems. Globally, nine of the top 10 banks and eight of the top 10 retailers run Power Systems. + +OpenPOWER’s success is not due solely to its innovative business model. We have been able to marry business model innovation with technology innovation to deliver the choice, freedom and superior performance demanded by clients around the world. So, how do our specs stack up? + +By any measure, POWER8 processors offer more memory, more threads, more bandwidth and more cache than Intel. Built for Big Data, Power Systems offer virtualization without limits and security without doubt. They are optimized to run core, mission-critical applications alongside emerging business applications, and they offer efficient, cost-effective and simple-to-manage clouds. + +Finally, as an independent analyst recently noted, “pricing is no contest,” with Power chips averaging about half the cost of Intel chips. + +While the single-company-led, closed, proprietary microprocessor model is fighting to maintain its foothold in the industry, it is no longer the only game in town. OpenPOWER is a bold, unprecedented move that is industry-led, community-driven and gaining momentum.
Congratulations to Intel for the introduction of its new chip, but as baseball great Satchel Paige once famously declared, “Don’t look back, something might be gaining on you.” diff --git a/content/blog/trusted-computing-applied-in-openpower-linux.md b/content/blog/trusted-computing-applied-in-openpower-linux.md new file mode 100644 index 0000000..429256b --- /dev/null +++ b/content/blog/trusted-computing-applied-in-openpower-linux.md @@ -0,0 +1,40 @@ +--- +title: "Trusted Computing Applied in OpenPOWER Linux" +date: "2015-01-17" +categories: + - "blogs" +--- + +### Introduction to Authors + +Mao Qiu Yin: Male, Teamsun, Director. Zhiqiang Tian: Male, Teamsun, SW Developer. + +### Background + +Computer system security is receiving ever greater emphasis from the Chinese government, which has created its own security standards. As a new open platform, OpenPOWER urgently needs to meet China's trusted computing security standard and provide a prototype system that conforms to the specifications, in order to satisfy the demands of the developing OpenPOWER ecosystem in China. + +### Demonstrated Features + +1. Trusted motherboard: as the RTM of trusted computing, it provides the highest-security solution. +2. TPCM card: as a PCIe device, it implements TCM with no hardware change to the system. +3. Support for the TPCM driver in Power OS. +4. Trusted computing implemented in the OS kernel, based on a white list and trusted database (a simple user-space illustration of the white-list idea appears after the Benefit section below). +5. A trusted chain passed from the RTM to the application. +6. TPCM card support at the OpenPOWER firmware level to support OpenPOWER virtualization. +7. Application of the OpenPOWER trusted computing node to China's security cloud system. + +### Our experience + +We chose Power Linux as the application OS, and it is easy to port the whole trusted computing software stack to other UNIX-like OSes such as Power AIX. + +### Benefit + +The prototype implementation on an OpenPOWER system that abides by China's security standards provides strong support for broad Power system promotion and, at the same time, a powerful guarantee for the development of the Power ecosystem in China's high-security market. It enriches the options available to Chinese ISVs and IHVs with a total solution from hardware to software.
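Purely as an illustration of the white-list idea in feature 4 above, here is a minimal user-space sketch in C that hashes a file with OpenSSL's SHA-256 and checks the digest against a hypothetical white list. It is not the kernel/TPCM implementation described by the authors; the file names and digests are illustrative only.

```c
/* Illustrative user-space sketch of white-list-based measurement:
 * hash a program file with SHA-256 and allow it only if the digest
 * appears in a known-good white list.  Build: gcc measure.c -lcrypto
 */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical white list of trusted SHA-256 digests (hex). */
static const char *whitelist[] = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
};

static int sha256_file(const char *path, char hex[65])
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);

    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256_Final(md, &ctx);
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 2; }

    char hex[65];
    if (sha256_file(argv[1], hex) != 0) { perror("open"); return 2; }

    for (size_t i = 0; i < sizeof whitelist / sizeof whitelist[0]; i++)
        if (strcmp(hex, whitelist[i]) == 0) {
            printf("%s is on the white list; allow execution\n", argv[1]);
            return 0;
        }
    printf("%s is NOT on the white list; deny execution\n", argv[1]);
    return 1;
}
```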
+ +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Qiuyin-Mao_OPFS2015_Teamsun_031015_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/turbulence-simulations-and-fine-grained-asynchronism-for-pseudo-spectral-codes.md b/content/blog/turbulence-simulations-and-fine-grained-asynchronism-for-pseudo-spectral-codes.md new file mode 100644 index 0000000..73bb395 --- /dev/null +++ b/content/blog/turbulence-simulations-and-fine-grained-asynchronism-for-pseudo-spectral-codes.md @@ -0,0 +1,27 @@ +--- +title: "Turbulence Simulations and Fine Grained Asynchronism for Pseudo-Spectral Codes" +date: "2019-04-01" +categories: + - "blogs" +tags: + - "openpower" + - "summit" + - "openpower-foundation" + - "georgia-institute-of-technology" + - "pseudo-spectral-codes" + - "oak-ridge-national-laboratory" +--- + +By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM + +Continuing our series of posts coming out of the [3](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/)[rd](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/) [OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/), Kiran Ravikumar, a PhD student at the Georgia Institute of Technology, spoke about fine-grained asynchronism for pseudo-spectral codes and how it applies to turbulence. + +Ravikumar discussed how turbulence is found everywhere in nature and engineering and covered the importance of performing huge turbulence simulations to better understand the fundamental physics of turbulent flows under conditions with disorderly fluctuations arising over a wide range of scales. + +To do this effectively, Ravikumar emphasized the value of using [Summit](https://www.olcf.ornl.gov/summit/), [the most powerful supercomputer in the world](https://www.top500.org/news/us-regains-top500-crown-with-summit-supercomputer-sierra-grabs-number-three-spot/). With Summit, his team can benefit from 512 GB of host memory per node and 16 GB of memory per GPU, a large difference between the amount of memory available on the GPU and on the CPU. He also covered how Summit will allow for both faster copies and communication. Essentially, they’ll be able to run massive problem sizes with fewer nodes than any other machine. + +Ravikumar detailed the successful development of a highly scalable GPU-accelerated algorithm for turbulence and 3D FFT, exploiting unique features of Summit. He also emphasized the CUDA Fortran implementation, which increases GPU speed by a factor of four at smaller problem sizes, with the belief that this will hold for larger problems on Summit. + +Ravikumar and his team are excited by the potential for performing turbulence simulations at unprecedented resolution on Summit. + +View Ravikumar’s full session [video](https://www.youtube.com/watch?v=_TlyHtqwc_4) and [slides](https://www.slideshare.net/ganesannarayanasamy/fine-grained-asynchronism-for-pseudospectral-codes-with-application-to-turbulence?ref=https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/).
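For readers unfamiliar with pseudo-spectral methods, the sketch below is a minimal, CPU-only illustration in C of the basic pattern Ravikumar's GPU code applies at enormous scale: transform to wavenumber space, multiply by ik, and transform back. It assumes the FFTW3 library is installed and is not taken from the Georgia Tech code, which uses CUDA Fortran and distributed 3D FFTs.

```c
/* Minimal 1D sketch of the pseudo-spectral pattern: differentiate a periodic
 * function by going to wavenumber space, multiplying by i*k, and coming back.
 * Illustrative only; build with: gcc spectral.c -lfftw3 -lm
 */
#include <complex.h>   /* include before fftw3.h so fftw_complex is C99 complex */
#include <fftw3.h>
#include <math.h>
#include <stdio.h>

#define N 64

int main(void)
{
    fftw_complex u[N], uhat[N], dudx[N];
    fftw_plan fwd = fftw_plan_dft_1d(N, u, uhat, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_1d(N, uhat, dudx, FFTW_BACKWARD, FFTW_ESTIMATE);

    /* Sample u(x) = sin(x) on a periodic grid of length 2*pi. */
    for (int j = 0; j < N; j++)
        u[j] = sin(2.0 * M_PI * j / N);

    fftw_execute(fwd);

    /* Multiply each Fourier mode by i*k (positive wavenumbers first, then negative). */
    for (int j = 0; j < N; j++) {
        int k = (j <= N / 2) ? j : j - N;
        uhat[j] *= I * (double)k;
    }

    fftw_execute(bwd);

    /* FFTW's backward transform is unnormalized, so divide by N.
     * The result approximates du/dx = cos(x). */
    for (int j = 0; j < 4; j++)
        printf("x = %5.3f   du/dx = % .5f\n", 2.0 * M_PI * j / N, creal(dudx[j]) / N);

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
    return 0;
}
```

In a production turbulence code the same transform-multiply-transform step runs as distributed 3D FFTs across thousands of GPUs, which is where the memory capacity and fast copies of Summit that Ravikumar highlighted become critical.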
diff --git a/content/blog/tyan-launches-its-openpower-customer-reference-system.md b/content/blog/tyan-launches-its-openpower-customer-reference-system.md new file mode 100644 index 0000000..fbbeb5c --- /dev/null +++ b/content/blog/tyan-launches-its-openpower-customer-reference-system.md @@ -0,0 +1,19 @@ +--- +title: "TYAN Launches its OpenPower customer reference system" +date: "2014-10-08" +categories: + - "press-releases" + - "blogs" +--- + +**San Francisco, USA - October 8th, 2014 –** TYAN, an industry-leading server platform design manufacturer and subsidiary of MITAC Computing Technology Corporation (Mitac Group), launched its OpenPOWER Customer Reference System, the [TYAN GN70-BP010](http://www.tyan.com/campaign/openpower/). It is the first OpenPOWER Reference System and follows the spirit of innovation and collaboration that defines the [OpenPOWER](https://openpowerfoundation.org/) architecture. This Customer Reference System is now available to customers. + +"Open resources, management flexibility, and hardware customization are becoming more important to IT experts across various industries," said Albert Mu, Vice President of MITAC Computing Technology Corporation's TYAN Business Unit. "To meet the emerging needs of evolving IT worlds, TYAN is honored to present its Palmetto System, the [TYAN GN70-BP010](http://www.tyan.com/campaign/openpower/). As the first commercialized customer reference system provided from an official member from the [OpenPOWER](https://openpowerfoundation.org/) ecosystem, the [TYAN GN70-BP010](http://www.tyan.com/campaign/openpower/) is based on the POWER8 Architecture and follows the [OpenPOWER](https://openpowerfoundation.org/) Foundation's design concept." + +The [TYAN GN70-BP010](http://www.tyan.com/campaign/openpower/) is a customer reference system which allows end users to deploy software based on the OpenPOWER architecture tailored to their individual requirements. It is a 2U system that contains (1) IBM® POWER8 Turismo SCM processor, (4) 240-pin R-DDR3 1600/1333MHz w ECC DIMM, (8) 2.5" /3.5" hot-swap HDD and supports multiple PCI-E G3 ports as well as (4) SATA-III 6.0Gb/s ports, with (1) CPU & heatsink, (4) 4GB DDR-3 and (1) 500GB 3.5" HDD L10 system bundled. TYAN's OpenPOWER Customer Reference System provides another opportunity for users to run their applications in a cost-effective and flexible way. It is an innovative and collaborative hardware solution for IT experts who are looking for a more open, flexible, customized, and intelligent IT deployment. + +[TYAN's GN70-BP010](http://www.tyan.com/campaign/openpower/), the OpenPOWER Customer Reference System, will be available at the end of October 2014. TYAN also announced a special promotion of the [TYAN GN70-BP010](http://www.tyan.com/campaign/openpower/) Customer Reference System. For more product information, please visit the [TYAN webpage](http://www.tyan.com/campaign/openpower/) or contact [lydia.tsai@mic.com.tw](mailto:lydia.tsai@mic.com.tw) + +\*\* OpenPOWER is a registered trademark of the OpenPOWER Foundation in the United States and other countries. IBM® POWER8 and other product/company/brand names mentioned above are registered trademarks of IBM and other entities in the United States and other countries.
\*\*Campaign webpage: [http://www.tyan.com/campaign/openpower/](http://www.tyan.com/campaign/openpower/) - See more at: http://www.tyan.com/newsroom\_pressroom\_detail.aspx?id=1648 diff --git a/content/blog/tyan-openpower-products-and-future-product-plans.md b/content/blog/tyan-openpower-products-and-future-product-plans.md new file mode 100644 index 0000000..0f1986a --- /dev/null +++ b/content/blog/tyan-openpower-products-and-future-product-plans.md @@ -0,0 +1,26 @@ +--- +title: "Tyan OpenPOWER products and future product plans" +date: "2015-01-17" +categories: + - "blogs" +--- + +### Presentation Objectives + +Invited to participate in the OpenPOWER Foundation, TYAN developed the OpenPOWER reference board following the spirit of innovation and collaboration that defines the OpenPOWER architecture. In addition, TYAN contributed the associated reference design to the community. In the presentation, TYAN would like to share its value proposition with the community and reveal its future product plan and associated milestones to the audience participating in the first OpenPOWER Summit 2015. + +### Abstract + +An introduction to TYAN and a brief on the contributions made to the OpenPOWER community in the past twelve months. TYAN will also share its future product plan and associated milestones with the audience. + +### Speaker Bio + +Albert Mu is Vice President at _MiTAC Computing Technology Corporation_ and General Manager for the Tyan Business Unit. From 2005 to 2008 he was with Intel as General Manager of the Global Server Innovation Group (GSIG) with the charter to develop differentiated system platform products for Internet Portal Data Center and Cloud segments. Prior to Intel, Albert Mu was Vice President and General Manager of the Network, Storage, and Server Group (NSSG) at Promise Technologies, Inc. and Corporate Vice President and Chief Technology Officer at Wistron Corporation.  Prior to Wistron, he was Vice President of Engineering at Clarent Corporation and worked at CISCO, HaL Computer and MIPS Computer. Mr. Mu received a BSEE degree from National Chiao Tung University, Hsinchu, Taiwan, an MSEE degree from the University of Texas at Austin, and an MS in Engineering Management from Stanford University.
+ +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Mu-Albert_OPFS2015_TYAN_030915.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/tyan-showcases-its-first-commercialized-power8-based-system-at-openpower-summit-2015.md b/content/blog/tyan-showcases-its-first-commercialized-power8-based-system-at-openpower-summit-2015.md new file mode 100644 index 0000000..3e1fa5d --- /dev/null +++ b/content/blog/tyan-showcases-its-first-commercialized-power8-based-system-at-openpower-summit-2015.md @@ -0,0 +1,35 @@ +--- +title: "Tyan showcases its first commercialized POWER8 Based System at OpenPOWER Summit 2015" +date: "2015-03-20" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +### **_More Cutting-Edge Platforms to Be Displayed at NVIDIA 2015 GPU Technology Conference_** + +**_TYAN OpenPOWER Product Information: [http://www.tyan.com/solutions/tyan\_openpower\_system.html](http://www.tyan.com/solutions/tyan_openpower_system.html) Related OPF Summit 2015 Information: [https://openpowerfoundation.org/2015-summit/](https://openpowerfoundation.org/2015-summit/) Related GTC 2015 Information: [http://www.gputechconf.com/](http://www.gputechconf.com/)_** + +**San Jose, California, USA - March 19, 2015 – TYAN, an industry-leading server platform design manufacturer and subsidiary of MITAC Computing Technology Corporation, will demonstrate the TYAN TN71-BP012, the first commercialized POWER8-based solution, at TYAN’s booth (#1112) during the OpenPOWER Summit 2015. Other cutting-edge platforms will be displayed at its booth (#831) during GTC 2015 as well.** + +The OpenPOWER Summit 2015 takes place March 17-19 at the McEnery Convention Center in San Jose, California. The TYAN TN71-BP012 platform is the first commercialized OpenPOWER-based hardware system and is designed around the concept of innovation and collaboration. As the first commercialized hardware system developed in collaboration with the OpenPOWER community, TYAN’s TN71-BP012 is a POWER8 processor-based solution that reveals the spirit of the OpenPOWER Foundation **–** resources and innovation are not limited to the community but are open to the world. + +IT experts can adopt TYAN’s TN71-BP012 as well as other technical resources shared by the OpenPOWER community to build their customized IT infrastructure and manage it in a flexible way. Visitors can experience TYAN’s extraordinary and innovative POWER8-supported platform and meet TYAN’s team during the OPF Summit 2015. + +“TYAN aims to provide reliable, flexible and high-performance hardware platforms that help customers achieve their goals as well as drive their business,” said Albert Mu, Vice President of MITAC Computing Technology Corporation’s TYAN Business Unit. “As a founding member, TYAN collaborates with the technical partners among the OpenPOWER ecosystem in the very beginning to develop TYAN’s POWER8 based solutions. We are honored to present the second generation of TYAN’s POWER8 based solution, the TYAN TN71-BP012 (project name: Habanero). The TYAN TN71-BP012 is based on the POWER 8 Architecture and provides tremendous memory capacity as well as outstanding performance that fits in datacenter, big data or HPC.” + +"TYAN's introduction of the world's first non-IBM branded OpenPOWER commercial server, designed and manufactured outside of IBM, is a significant moment for the OpenPOWER Foundation," said Brad McCredie, President of the OpenPOWER Foundation and IBM Fellow. 
"This server provides a compelling high-performance, cost-effective hardware alternative for hyperscale data centers around the world." + +The TYAN TN71-BP012 is the first commercialized system which is supported by the open resources shared from the OpenPOWER Foundation and enables end users to deploy software tailored to their individual requirements. It is a 2U system that contains (1) IBM® POWER8 Turismo SCM processor, (32) 240-pin R-DDR3 1600/1333Mhz w ECC DIMM, (14) 2.5” /3.5” hot-swap HDD, (4) HH PCI-E Gen.3 slots, (4) 10GBASE-T w/Mezz Card. With tremendous memory capacity and high-end computing performance, TYAN’s OpenPOWER standards-based system provides another approach for users to run their applications in an innovative, optimized and flexible way. + +All are welcome to visit TYAN’s booth (#1112) to explore the unique TYAN OpenPOWER hardware solution during the OpenPOWER Summit 2015 or learn more about TYAN’s OpenPOWER supported product plan at Mr. Albert Mu’s presentation from 15:40 to 15:50 pm on March., 18 during the summit. TYAN’s TN71-BP012, the first commercialized system built to OpenPOWER standards, will be available in end of Q2’15. For more product information, please visit the TYAN webpage[http://www.tyan.com/solutions/tyan\_openpower\_system.html](http://www.tyan.com/solutions/tyan_openpower_system.html). + +TYAN will also showcase two high performance solutions which are designed for both enterprise and HPC Applications at TYAN booth (#831) during GTC 2015. The TYAN FT76-B7922 is a special 4-way in 4U solution that supports (4) NVIDIA® GPU accelerators while the TYAN FT77C-B7079 supports up to  (8) NVIDIA Tesla® K80 dual-GPU accelerators in a 4U chassis. Attendees are welcomed to explore the solutions and meet TYAN team during the event. + +**About TYAN** + +TYAN, as a leading server brand of Mitac Computing Technology Corporation under the MiTAC Group (TSE:3706), designs, manufactures and markets advanced x86 and x86-64 server/workstation board technology, platforms and server solution products. Its products are sold to OEMs, VARs, System Integrators and Resellers worldwide for a wide range of applications. TYAN enables its customers to be technology leaders by providing scalable, highly-integrated, and reliable products for a wide range of applications such as server appliances and solutions for high-performance computing and server/workstation used in markets such as CAD, DCC, E&P and HPC. + +For more information, visit MiTAC’s website at [http://www.mitac.com](http://www.mitac.com/) or TYAN’s website at [http://www.tyan.com](http://www.tyan.com/). diff --git a/content/blog/tyans-openpower-customer-reference-system-now-available.md b/content/blog/tyans-openpower-customer-reference-system-now-available.md new file mode 100644 index 0000000..206abbc --- /dev/null +++ b/content/blog/tyans-openpower-customer-reference-system-now-available.md @@ -0,0 +1,66 @@ +--- +title: "TYAN’s OpenPOWER Customer Reference System Now Available" +date: "2014-10-08" +categories: + - "blogs" +tags: + - "tyan" +--- + +_Innovative, Collaborative and Open_ + +Open resources, management flexibility, and hardware customization are becoming more important to IT experts across various industries. To meet the emerging needs of evolving IT worlds, TYAN is honored to present its Palmetto System, the TYAN GN70-BP010. 
As the first commercialized customer reference system provided by an official member of the OpenPOWER ecosystem, the TYAN GN70-BP010 is based on the POWER8 Architecture and follows the OpenPOWER Foundation’s design concept. + +The TYAN GN70-BP010 is a customer reference system which allows end users to deploy software based on the OpenPOWER architecture tailored to their individual requirements. It provides another opportunity for users to run their applications in a cost-effective and flexible way. It is an innovative and collaborative hardware solution for IT experts who are looking for a more open, flexible, customized, and intelligent IT deployment. + +**_TYAN GN70-BP010 Product Features:_** + +- **Enclosure** +- Industry 19" rack-mountable 2U chassis +- Dimension: D27.56" x W16.93" x H3.43" (D700 x W430 x H87mm) +- (8) 2.5” /3.5” hot-swap HDD + +- **Power Supply** +- (1+1) 770W DPS-770CB B (80-plus gold) + +- **System Cooling** +- **(4)** 6cm hot-swap fans + +- **Motherboard** +- SP010GM2NR, ATX 12” x 9.6” (304.8 x 235.2mm) + +- **Processor** +- **(1)** IBM® POWER8 Turismo SCM processor + +- **Memory** +- (4) 240-pin R-DDR3 1600/1333MHz w ECC DIMM sockets + +- **Expansion Slots** +- PCI-E Gen3 x16 slot +- (1) PCI-E Gen3 x8 slot + +- **Integrated LAN controllers** +- (2) GbE ports (via BMC5718) + +- **Storage** +- (4) SATA-III 6.0Gb/s ports (via **Marvell 88SE9235**) + +- **Rear I/O** +- (2) GbE RJ45 +- (1) Stacked dual-port USB 3.0 +- (1) Stacked COM port and VGA port +- (1) FPIO (reboot/power on button/HDD LED/Power ON LED) + +- **AST2400 iBMC w/iKVM (IPMI v2.0 compliant)** + +For more information or product availability to order, please contact: [lydia.tsai@mic.com.tw](mailto:lydia.tsai@mic.com.tw) diff --git a/content/blog/unchaining-the-data-center-with-openpower-reengineering-a-server-ecosystem.md b/content/blog/unchaining-the-data-center-with-openpower-reengineering-a-server-ecosystem.md new file mode 100644 index 0000000..f16d64f --- /dev/null +++ b/content/blog/unchaining-the-data-center-with-openpower-reengineering-a-server-ecosystem.md @@ -0,0 +1,28 @@ +--- +title: "Unchaining the data center with OpenPOWER: Reengineering a server ecosystem" +date: "2014-08-12" +categories: + - "blogs" +--- + +By Michael Gschwind, STSM & Senior Manager, System Architecture, IBM [![33601413](images/33601413.jpg)](https://openpowerfoundation.org/wp-content/uploads/2014/08/33601413.jpg) + +Later today at [HOT CHIPS](http://www.hotchips.org/), a leading semiconductor conference, I will be providing an update on IBM’s POWER8 processor and how, through the [OpenPOWER Foundation](http://www.openpowerfoundation.org/), we are making great strides opening the processor up not just from a hardware perspective, but also at the software level. + +It was at this same show last year that my colleague IBM POWER hardware architect Jeff Stuecheli first revealed how POWER8 would be made open for development.  This move has been met with great excitement over the past twelve months and has been seen as an important milestone because, with the advent of Big Data, companies are demanding more from their data centers -- more than what commodity servers built on decades-old PC-era technology can deliver. POWER technology is designed specifically to meet these demands and, because it is open, it frees technology providers to innovate together and accelerate industry advancement. 
+ +Beyond being a significant technical and open development milestone, POWER8 is also the basis for the OpenPOWER Foundation, an open technical organization formed by data center industry leaders that enables data center operators to rethink their approach to technology. In a world where there’s constant tension between the need for standardization and the need for innovation, OpenPOWER was created to foster an open ecosystem, using the POWER architecture to share expertise, investment, and server-class intellectual property to serve the evolving needs of customers. + +OpenPOWER is about **_choice_** in large-scale data centers: + +- **The choice to differentiate —** Through the Foundation, members can build workload-optimized solutions customized for servers and use best-of-breed components from an open ecosystem, instead of settling for “one size fits all.” This will in turn increase value. +- **The choice to innovate —** The OpenPOWER Foundation offers a collaborative environment where members can jointly create a vibrant open ecosystem for data centers. +- **The choice to grow —** Each member of the Foundation can implement new capabilities instead of relying on technology scaling of a stagnant PC architecture that has run out of headroom to grow. + +After all that has been accomplished through the OpenPOWER Foundation on the hardware side, today I want to share some new advances on the software side. First of all, **I am happy to announce that** **_The New OpenPOWER Application Binary Interface_** **(ABI) has been published**. The ABI is a collection of rules for the OpenPOWER Foundation with the scope of standardizing the inter-operation of application components. This is significant because, when programs are optimized by compilers, we can all be more efficient. + +Second, **the OpenPOWER Vector SIMD Programming Model has been implemented**. This model transcends traditional hardware-centric SIMD programming models, with the scope of creating intuitive programming models and facilitating application portability while enabling compilers to optimize OpenPOWER workloads even better (a short illustration appears at the end of this post). + +These advancements were made possible through consultation with OpenPOWER members, and they will grant more room for bringing in innovation at all levels of the hardware and software stacks. + +The OpenPOWER Foundation’s collaborative innovation is already changing the industry, and major data center stakeholders are joining OpenPOWER. 
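As a small, purely illustrative taste of what code written to this vector programming model looks like, the C sketch below uses the AltiVec/VSX intrinsics exposed through `<altivec.h>`; the function and variable names are my own and are not drawn from the published ABI or programming-model documents.

```c
/* Illustrative only: a SAXPY-style loop written with AltiVec/VSX intrinsics.
 * Build on a POWER system with, e.g., gcc -mcpu=power8 -O2 saxpy_vec.c
 */
#include <altivec.h>
#include <stdio.h>

/* y[i] = a * x[i] + y[i], processing four single-precision lanes at a time. */
static void saxpy_vec(float a, const float *x, float *y, int n)
{
    vector float va = vec_splats(a);                 /* broadcast a to all lanes */
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        vector float vx = vec_xl(0, (float *)&x[i]); /* unaligned vector load    */
        vector float vy = vec_xl(0, &y[i]);
        vy = vec_madd(va, vx, vy);                   /* fused multiply-add       */
        vec_xst(vy, 0, &y[i]);                       /* vector store             */
    }
    for (; i < n; i++)                               /* scalar remainder         */
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    saxpy_vec(2.0f, x, y, 8);
    for (int i = 0; i < 8; i++)
        printf("%.1f ", y[i]);
    printf("\n");
    return 0;
}
```

The point of such a model is that the same intrinsics-level source stays portable across compilers and OpenPOWER implementations, while the compiler remains free to schedule the underlying VSX instructions.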
If you want to learn more about the OpenPOWER Foundation, visit [http://openpowerfoundation.org/](https://openpowerfoundation.org/) diff --git a/content/blog/updated-openpower-specifications-openpower-advanced-accelerator-adapter-compliance-25g-i-o-test-harness-and-test-suite-specification-and-openpower-architecture-compliance-definition.md b/content/blog/updated-openpower-specifications-openpower-advanced-accelerator-adapter-compliance-25g-i-o-test-harness-and-test-suite-specification-and-openpower-architecture-compliance-definition.md new file mode 100644 index 0000000..81dcb6d --- /dev/null +++ b/content/blog/updated-openpower-specifications-openpower-advanced-accelerator-adapter-compliance-25g-i-o-test-harness-and-test-suite-specification-and-openpower-architecture-compliance-definition.md @@ -0,0 +1,46 @@ +--- +title: "Updated OpenPOWER Specifications -- OpenPOWER Advanced Accelerator Adapter Compliance: 25G I/O Test Harness and Test Suite Specification and OpenPOWER Architecture Compliance Definition" +date: "2019-04-11" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Sandy Woodward, OpenPOWER Foundation Compliance Work Group Chair, IBM Academy of Technology Member_ + +The OpenPOWER Foundation board recently approved two updated Compliance Work Group documents that are posted on the OpenPOWER Foundation technical resources page. These documents serve as references for both OpenPOWER members and non-members alike who are interested in OpenPOWER. + +1) [OpenPOWER Advanced Accelerator Adapter Compliance: 25G I/O Test Harness and Test Suite Specification](https://openpowerfoundation.org/?resource_lib=advanced-accelerator-adapter-compliance-25g-i-o-test-harness-and-test-suite-specification) + +This document completed its public review and was approved for publication as a final Work Group Specification. + +2) [OpenPOWER Architecture Compliance Definition](https://openpowerfoundation.org/?resource_lib=openpower-architecture-compliance-definition-review-draft) + +The Compliance Work Group updated the 2016 document with current information, including input OpenPOWER specifications from other OpenPOWER Work Groups and an overview of Compliance Test Harness and Test Suite specifications. This document completed its public review on March 29, 2019. + +**OpenPOWER Advanced Accelerator Adapter Compliance: 25G I/O Test Harness and Test Suite Specification** + +The purpose of the _OpenPOWER Advanced Accelerator Adapter Compliance: 25G I/O Test Harness and Test Suite_ specification is to provide the test suite requirements to demonstrate OpenPOWER Advanced Accelerator Adapter 25G I/O compliance for POWER9™ systems, such as for the OpenCAPI 3.0 interconnect. + +The input to this specification is the following specification: + +- [_OpenPOWER Advanced Accelerator Adapter Electro-Mechanical Specification_](https://openpowerfoundation.org/?resource_lib=advanced-accelerator-adapter-electro-mechanical-specification), which describes the electro-mechanical specification for advanced accelerator adapters within the OpenPOWER ecosystem supported by IBM POWER9. + +There are two accelerator approaches for the 25Gbit/sec interface, and the compliance for each approach is defined in this document. The first approach is a Mezzanine Adapter Card, which is attached to the system planar via two connectors. The Mezzanine Adapter Card for OpenPOWER systems based on the POWER9 processor attaches to the 25Gbit/sec interface native to the POWER9 and plugs into the mezzanine card connectors. 
+ +The second approach is a Cabled Interface Extension to an adapter card. It uses a PCIe® card as an example, but the cabled extension does not require that the adapter card be PCIe. POWER9 platforms support the optional cabling of the 25Gbit/sec Advanced Accelerator Interface to the advanced accelerator adapter in a riser card plugged into a PCIe slot in the same system. In addition, the adapter could be located in a different drawer of the rack. + +**OpenPOWER Architecture Compliance Definition** + +The purpose of the _OpenPOWER Architecture Compliance Definition_ document is to give a consistent approach to compliance under the guidance of the Compliance Work Group. It contains the following: + +- Document the OpenPOWER specifications that contain the interfaces that are required to be OpenPOWER compliant +- Document an overview of the Compliance Test Harness and Test Suite Specifications that have been developed in the Compliance Work Group, and an outline of the contents expected in each specification +- Document procedures on how to measure and document compliance and where to submit the report for compliance + +This version of the document is based on POWER8™ systems and POWER9™ systems. It is expected that this document shall be updated for additional POWER8 system interfaces, additional POWER9 system interfaces, and for next-generation OpenPOWER systems. + +The OpenPOWER Architecture Compliance Definition document and the OpenPOWER Advanced Accelerator Adapter Compliance: 25G I/O Test Harness and Test Suite Specification are Standards Track, Work Group Specifications owned by the Compliance Workgroup and handled in compliance with the requirements outlined in the OpenPOWER Foundation Work Group (WG) Process document. + +If you have comments you would like to make on these new specification documents, comments can be submitted to the Compliance Workgroup by emailing <[openpower-arch-comp-def@mailinglist.openpowerfoundation.org](mailto:openpower-arch-comp-def@mailinglist.openpowerfoundation.org)> or <[openpower-25gio-thts@mailinglist.openpowerfoundation.org](mailto:openpower-25gio-thts@mailinglist.openpowerfoundation.org)\>. diff --git a/content/blog/user-feedback-ibm-power9-functional-simulator.md b/content/blog/user-feedback-ibm-power9-functional-simulator.md new file mode 100644 index 0000000..5b298d6 --- /dev/null +++ b/content/blog/user-feedback-ibm-power9-functional-simulator.md @@ -0,0 +1,39 @@ +--- +title: "User Feedback on the IBM® POWER9 Functional Simulator" +date: "2018-02-14" +categories: + - "blogs" +tags: + - "openpower" + - "openpower-foundation" + - "power9" + - "power9-functional-simulator" +--- + +By Leif Reinert, Bradford Thomasson and Saif Abrar + +Earlier this week, we introduced the availability of the POWER9 Functional Simulator. Our team is proud to offer the simulation environment, which can be [downloaded from our website](https://www-304.ibm.com/webapp/set2/sas/f/pwrfs/pwr9/home.html). + +The POWER9 Functional Simulator has already been put to the test for a variety of use cases. Two early users shared their feedback with us on how the tool helped solve a problem they were facing. + +**User Testimony 1:** + +### **Compiler optimization of GCC on POWER9** + +Problem: LZ4 compression in the pipeline was driven by the improper instruction sequence neg-and-cntlzd-subfic on POWER8. Dependency on the previous instruction in the sequence resulted in many ISU rejections because sources were not readily available, which reduced performance. 
+ +Instruction traces were generated on the POWER8 Functional Simulator, then post-processed and analyzed by the “sim\_ppc” tools, revealing an FXU dependency chain delaying instruction completion for the mentioned sequence of instructions. + +Solution: For POWER9, the new instruction cnttzd (Count Trailing Zeros Dword) was utilized as a single-instruction replacement. A comparison between the LZ4 compression simulations on the POWER8 and the POWER9 Functional Simulator and their post-processing “sim\_ppc” companion tools revealed a significant performance gain from implementing the new instruction on POWER9. + +**User Testimony 2:** + +### **Compiler Comparison of LLVM vs GCC (Eigen-Quatmul workload compilation)** + +Problem: The GCC-compiled version is 36% slower than the LLVM-compiled version. + +The GCC-compiled version of the Quatmul workload was executed on the POWER9 Functional Simulator and instruction traces were generated. The traces, post-processed and analyzed with the “sim\_ppc” tools, revealed the vector load issuing before all four scalar stores; this resulted in multiple store-hit-load flushes until eventually all the stores executed ahead of the load. + +Solution: Inserting an isync instruction between the scalar stores and the vector load, similar to LLVM, prevents store-hit-load flushes. The load keeps rejecting until the data is fully contained and then executes. Observed performance improvement of ~30%. + +In bringing the POWER9 Functional Simulator and its companion tools to the public, we are excited to provide an ideal platform for engineers and developers to explore and continue to build out the POWER9 platform. If you have any technical inquiries or suggestions, please reach out to our Cognitive Systems Simulation team through the [Customer Connect Support Channel](https://www.ibm.com/technologyconnect/issuemgmt/home.xhtml). diff --git a/content/blog/using-docker-in-high-performance-computing-in-openpower-environment.md b/content/blog/using-docker-in-high-performance-computing-in-openpower-environment.md new file mode 100644 index 0000000..c4a0999 --- /dev/null +++ b/content/blog/using-docker-in-high-performance-computing-in-openpower-environment.md @@ -0,0 +1,30 @@ +--- +title: "Using Docker in High Performance Computing in OpenPOWER Environment" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Introduction to Authors + +Min Xue Bin: Male, IBM STG China, advisory software engineer, LSF developer, mainly focused on High Performance Computing. Ding Zhao Hui: Male, IBM STG China, Senior LSF architect, mainly focused on the LSF roadmap. Wang Yan Guang: Male, IBM STG China, development manager for LSF/LS. + +### Background + +OpenPOWER will be one of the major platforms in High Performance Computing (HPC). IBM Load Sharing Facility (LSF) is well-known cluster workload management software that aims to exploit the computation capacity of clusters to the maximum in HPC, and LSF has proven to run well on the OpenPOWER platform. As an open platform for developers and system administrators to build, ship and run applications, Docker has been widely used in the cloud. Could we extend Docker's benefits to HPC? Yes, we can. By integrating LSF and Docker on the OpenPOWER platform, we achieved better application docking in OpenPOWER HPC. + +### Challenges + +In HPC, there are lots of complex customer workloads which depend on multiple packages, libraries, and environments. It is hard to manage customer workload resource guarantees, performance isolation, application encapsulation, repeatability and compliance. 
+ +### Our experience + +We enabled LSF to work in the OpenPOWER environment, starting from IBM Power8 Little Endian. We also ported Docker to the platform. Based on that, we completed the integration between LSF and Docker to extend its benefits to the OpenPOWER HPC area. + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Sanjabi-Sam_OPFS2015_IBM_v2-2.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/using-nvm-express-ssds-and-capi-to-accelerate-data-center-applications-in-openpower-systems.md b/content/blog/using-nvm-express-ssds-and-capi-to-accelerate-data-center-applications-in-openpower-systems.md new file mode 100644 index 0000000..c954db9 --- /dev/null +++ b/content/blog/using-nvm-express-ssds-and-capi-to-accelerate-data-center-applications-in-openpower-systems.md @@ -0,0 +1,30 @@ +--- +title: "Using NVM Express SSDs and CAPI to Accelerate Data-Center Applications in OpenPOWER Systems" +date: "2015-01-16" +categories: + - "blogs" +--- + +### Organization + +PMC-Sierra, OpenPOWER Silver Member + +### Objective + +The objective of this presentation is to showcase how NVM Express and CAPI can be used together to enable very high performance application acceleration in Power8-based servers. We target applications that are of interest to large data-center/hyper-scale customers such as Hadoop/Hive (map-reduce) and NoSQL (e.g. Redis) databases. The talk will discuss aspects of NVM Express, CAPI and the multi-threading capabilities of the Power8 processor. + +### Abstract + +NVM Express is a standards-based method of communication with PCIe-attached Non-Volatile Memory. An NVM Express open-source driver has been an integrated part of the Linux kernel since March 2012 (version 3.3) and allows for very high performance. Currently there are NVM Express SSDs on the market that can achieve read speeds of over 3GB/s. In a simple block diagram of the configuration, a PCIe NVM Express SSD and a CAPI accelerator card are connected to a Power8 CPU inside a Power8 server. We present results for a platform consisting of an NVM Express SSD, a CAPI accelerator card and a software stack running on a Power8 system. We show how the threading of the Power8 CPU can be used to move data from the SSD to the CAPI card at very high speeds and implement accelerator functions inside the CAPI card that can process the data at these speeds. We discuss several applications that can be serviced using this combination of NVMe SSD, CAPI and Power8. + +### Bio + +[Stephen Bates](https://www.linkedin.com/profile/view?id=9259869&authType=NAME_SEARCH&authToken=0WuR&locale=en_US&srchid=32272301421438709217&srchindex=1&srchtotal=638&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438709217%2CVSRPtargetId%3A9259869%2CVSRPcmpt%3Aprimary) is a Technical Director at PMC-Sierra, Inc. He directs PMC's Non-Volatile Memory characterization program and is an architect for PMC’s Flashtec™ family of SSD controllers. Prior to PMC he taught at the University of Alberta, Canada. Before that he worked on DSP and ECC. He has a PhD from the University of Edinburgh and is a Senior Member of the IEEE.
+ +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Bates-Stephen_OPFS2015_PMC-Sierra_031015_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/using-power-technology-to-detect-deepfakes.md b/content/blog/using-power-technology-to-detect-deepfakes.md new file mode 100644 index 0000000..a7151e8 --- /dev/null +++ b/content/blog/using-power-technology-to-detect-deepfakes.md @@ -0,0 +1,33 @@ +--- +title: "Using POWER Technology to Detect Deepfakes" +date: "2020-01-21" +categories: + - "blogs" +tags: + - "ibm" + - "openpower-foundation" + - "power9" + - "ibm-ac922" + - "deepfakes" + - "deepfake-technology" +--- + +[Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), OpenPOWER Leader in Education and Research, IBM Systems + +Our society is struggling with deepfake images and videos and the harmful impact that they can have by spreading misinformation. Detecting these malicious efforts will become even more difficult as the technology becomes more advanced. As deepfake videos continue to influence public opinion, it’s becoming increasingly important to develop technology that can detect and reveal deepfakes as the false information they are. + +This is what makes the work of Pranjal Ranjan, Sarvesh Patil, Badhrinarayan Malolan, Ankit Parekh & Saksham Singh at Veermata Jijabai Technological Institute (VJTI) Mumbai so significant and exciting. The students presented their work on deepfake detection at the 26th [IEEE International Conference on High Performance Computing, Data, And Analytics](https://hipc.org/) held in Hyderabad, India last month. The Conference serves as a platform for showcasing current work by researchers in the field of high performance computing. + +To build a deepfake detection program, the students used a number of videos involving facial reenactments, where the facial movements and words from one person are swapped with those of another, creating a video where a person appears to be saying something that they did not, in fact, say. + +The students’ video, at the bottom of this article, for example, demonstrates how deepfake technology can be used to edit a video of former US President Barack Obama giving a public address, so that the President appears to be saying something originally said by actor and director Jordan Peele. + +![Detecting deepfakes with AI trained on POWER9.](images/HiPC.png) + +By examining the technology behind facial reenactment, the students were able to detect the exact location of the facial manipulation in a fake image or video, and therefore reveal the deepfake. + +The students partnered with the University of Oregon to receive access to their Power9 systems to train their models. By using the computational power of the IBM AC922 server, containing 4 NVIDIA Tesla V100 GPUs, the students found a 30% performance boost over other similar setups. This allowed them to train their model more efficiently and quickly.
+ +Learn more about the students’ work and demonstration here: + + diff --git a/content/blog/vereign-openpower-summit-europe.md b/content/blog/vereign-openpower-summit-europe.md new file mode 100644 index 0000000..89d803a --- /dev/null +++ b/content/blog/vereign-openpower-summit-europe.md @@ -0,0 +1,20 @@ +--- +title: "Building an Open Trustworthy Stack: Vereign at OpenPOWER Summit Europe" +date: "2018-11-02" +categories: + - "blogs" +tags: + - "featured" +--- + +By: Georg Greve, co-founder & president, Vereign + +I was fortunate enough to speak at this year’s OpenPOWER’s European Summit on behalf of Vereign, a solution to seamlessly add self-sovereign identity, authenticity and privacy to any kind of application or service. My presentation “Identity, Authentication and Privacy for 4 Billion People” covered how this solution will resolve the issues affecting email users today. + +Email is the most important communication network on the planet. Used by nearly four billion users, transmitting 281 billion messages a day, it is far larger than any social network or messenger platform to date. Email is also the only platform that is not under the control of a single vendor. In short: email is part of the lifeblood of business and personal communication. But it has also become rife with malware, with more than 92% of all cyber attacks conducted via email, including identity theft and business email compromise. + + + +Vereign offers the solution: a global self-sovereign identity and personal data under user control. Verified identity, message authenticity and privacy make email not only the largest, but also the most social communication network. Vereign upgrades email to being the most reliable and authentic communication method, seamlessly upgrading existing providers, platforms and solutions. This has become possible on a purely open, trustworthy stack built on OpenPOWER. + +Vereign will go into production next year, but you don't have to wait to try it out for yourself. There is a public beta coming that will be limited in numbers at first. Join [https://beta.vereign.com/](https://beta.vereign.com/) to get early access and learn more at [https://www.vereign.com/](https://www.vereign.com/). diff --git a/content/blog/video-ibm-and-openpower-partner-with-oak-ridge-national-labs-to-solve-worlds-toughest-challenges.md b/content/blog/video-ibm-and-openpower-partner-with-oak-ridge-national-labs-to-solve-worlds-toughest-challenges.md new file mode 100644 index 0000000..5e28358 --- /dev/null +++ b/content/blog/video-ibm-and-openpower-partner-with-oak-ridge-national-labs-to-solve-worlds-toughest-challenges.md @@ -0,0 +1,27 @@ +--- +title: "Video: IBM and OpenPOWER Partner with Oak Ridge National Labs to Solve World's Toughest Challenges" +date: "2015-12-15" +categories: + - "videos" + - "blogs" +tags: + - "openpower" + - "hpc" + - "oak-ridge-national-lab" + - "supercomputing" + - "summit" +--- + +_By Jack Wells, PhD, Director of Science, Oak Ridge National Laboratory_ + +https://www.youtube.com/watch?v=rn1t\_T2QbSY + +The mission of Oak Ridge National Laboratory (ORNL) is to deliver scientific discoveries and technical breakthroughs that will accelerate the development and deployment of solutions in clean energy and global security, and in doing so create economic opportunity for the nation. By partnering with OpenPOWER, we are using the next generation POWER and GPU processor technologies to build Summit, a supercomputer that will have 5x-10x greater performance than today's leadership systems. 
+ +With Summit in place, ORNL will be able to better focus our scientific and technical expertise and apply our leadership-class data and compute infrastructure to solve some of the greatest challenges of our time. We will be able to provide new insights related to climate change, understand the molecular machinery of the brain, better control combustion for cleaner-running engines and perform a full physics simulation of ITER to improve the performance of this fusion reactor. + +To learn more, watch this behind-the-scenes look at the process here at ORNL. + +* * * + +[![wells photo](images/wells-photo-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/wells-photo.jpg)_Jack Wells is the Director of Science for the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science national user facility, and is responsible for the scientific outcomes of the OLCF’s user programs. Wells has previously led both ORNL’s Computational Materials Sciences group in the Computer Science and Mathematics Division and the Nanomaterials Theory Institute in the Center for Nanophase Materials Sciences. Prior to joining ORNL as a Wigner Fellow in 1997, Wells was a postdoctoral fellow within the Institute for Theoretical Atomic and Molecular Physics at the Harvard-Smithsonian Center for Astrophysics._ diff --git a/content/blog/welcome-antmicro-to-the-openpower-foundation.md b/content/blog/welcome-antmicro-to-the-openpower-foundation.md new file mode 100644 index 0000000..412252c --- /dev/null +++ b/content/blog/welcome-antmicro-to-the-openpower-foundation.md @@ -0,0 +1,71 @@ +--- +title: "Welcome Antmicro to the OpenPOWER Foundation" +date: "2020-07-21" +categories: + - "blogs" +tags: + - "fpga" + - "openpower-foundation" + - "power-isa" + - "microwatt" + - "risc-v" + - "open-source-hardware" + - "renode" + - "antmicro" + - "a2i" + - "chips-alliance" +--- + +By: James Kulina, Executive Director, OpenPOWER Foundation + +This May, [Antmicro announced support for the POWER ISA in Renode](https://antmicro.com/blog/2020/05/microwatt-power-isa-in-renode/), its open source, multi-architecture, heterogeneous multi-core capable simulator for software development and software-hardware co-development. + +It’s an exciting development, as developers can now test applications based on the POWER ISA before running them on actual hardware. It’s an important step in achieving the vision of the OpenPOWER Foundation - to make POWER the easiest architecture on which to go from an idea to a silicon chip. + +I recently caught up with [Michael Gielda](https://www.linkedin.com/in/mgielda/?originalSubdomain=pl), VP of business development, to discuss Antmicro, its role in the OpenPOWER Foundation ecosystem and its beliefs on open source hardware in general. + +![](images/Renode-OpenPOWER-1024x675.png) + +**Can you tell us more about [Antmicro](https://antmicro.com/) and what your company does?** + +Of course! Antmicro is a software-driven tech company developing modern computing solutions for our customers based on open source platforms and paradigms. We provide engineering services and strategic guidance across a broad range of open source hardware and software solutions that we actively create and contribute to, to meet the needs of clients who are looking for world-class, future-proof, modular and scalable systems. + +Our work often involves building heterogeneous Cloud-to-Edge AI and vision processing systems, custom FPGA solutions, FPGA & ASIC development tooling, etc.
The broad range of open source technologies that Antmicro develops includes open source containerization, virtualization, device management, robotics, networking and AI libraries, operating systems, parsers, simulators, synthesis, place and route tools and more. + +**What inspired Antmicro to join the OpenPOWER Foundation?** + +We believe that the entire processing technology stack can benefit from becoming open source, and open ISAs are just a logical consequence of open source software and hardware that came before them. + +In early 2010 we were one of the few companies using OpenRISC commercially, and then when RISC-V came along, we quickly became a Platinum Founding member and one of the first companies building real solutions using the architecture. CHIPS Alliance, where we also play a very active role, takes the vision of open silicon from “just” the ISA and cores to entire chips and related workflows, and when the POWER ISA became open source, given our strong belief in a vendor-neutral, multi-solution ecosystem that is needed to make open hardware a reality, it was only a matter of time for us to join OpenPOWER. + +The immediate stimulus for joining was related to our work in the FPGA softcore space and implementing support for the POWER ISA and Microwatt in Renode - our open source simulation framework for software and hardware co-development. We strongly believe that POWER has a role to play in the server space and other use cases, and providing open source implementations like [Microwatt](https://openpowerfoundation.org/openpower-summit-north-america-2019-introducing-the-microwatt-fpga-soft-cpu-core/) and more recently [A2I](https://openpowerfoundation.org/a2i-power-processor-core-contributed-to-openpower-community-to-advance-open-hardware-collaboration/) are incredibly important for driving a collaborative ecosystem. + +**What do you hope to contribute and gain as a member?** + +Our aims and ambitions are aligned with the efforts of RISC-V International, CHIPS Alliance and OpenPOWER Foundation to create an open hardware ecosystem with robust tools and workflows for software-driven chips development. We hope to be able to leverage our work with RISC-V and CHIPS to bring the SoC generators, FPGA IP, simulation, open source FPGA and ASIC design tools to the POWER space, and at the same time find partners and customers who want to build complete solutions on the POWER ISA and could use our open source software and hardware expertise and the broad pool of open source platforms that we develop and contribute to. + +We believe the OpenPOWER Foundation will be a key player in building an open source future of server platforms, which are a key element of our open source cloud-to-edge AI vision; we’re happy to be part of the group that will make it happen. + +**Can you tell us about your most recent announcement related to the Renode framework?** + +Renode is our open source simulation framework that speeds up embedded and IoT systems development and hardware / software co-development with its testing, CI and debugging functionalities. It has lately reached an important milestone with the 1.9 release which, among other things, comes with POWER ISA and Microwatt support. Thus, POWER has become the 2nd major open architecture in Renode’s portfolio after RISC-V - a development that confirms Renode as a truly multi-architecture simulator. 
+ +The new release also brings a range of improvements all across the board, including better co-simulation capabilities, packaging, new platforms for both Arm, RISC-V and the afore-mentioned POWER, as well as some new and exciting use cases such as testing and benchmarking MCU-oriented machine learning software in a recent collaboration with Google’s TensorFlow Lite team. + +**What are your views on the future of open source architecture?** + +We envision the future of the open source architecture domain as more collaborative, software-driven and vibrant, where many parties work together to create state-of-the-art chips using well-established, reusable components, chiplets and interconnects. It’s a future in which multiple architectures coexist in a modular environment that is driven by openness and software-powered innovation. + +**Is there a trend that you are most excited about in open hardware?** + +The development that we find especially compelling is the emergence of open tooling and new methodologies that allow hardware developers to employ a software-based approach to programming silicon. The resulting simplification of FPGA and ASIC development flows is attracting people from various backgrounds into the ecosystem, which they enrich with their contributions, lowering the entry point and establishing exciting collaborations in the process. + +The ability to co-design hardware and software side by side has profound implications for AI-oriented hardware, where algorithmic advances are made rapidly and can often change the requirements for compute platforms in unexpected ways. The coming together of the hardware and software domains is an extremely exciting trend that will open a lot of opportunities. Since the establishment of Antmicro we have always supported and worked towards that trend. + +**Make a prediction - what will the state of open source hardware look like in 5-10 years?** + +We think that the collaboration enabled by open source will be the dominant driver for innovation in hardware design in the coming decade. An ecosystem of advanced open source EDA tooling targeting IPs written in multiple HDLs (and/or mixture thereof - Chisel, SystemVerilog, VHDL should all ‘just work’), together with collaboratively developed open source ISAs, IP frameworks and productivity tools, will make it easier than ever to create a dedicated ASIC design without spending billions in R&D. + +More flexible chips will be built through collaborations between many parties using chiplet technologies as well as fast and open interconnect standards like AIB. Programs such as Google’s open source shuttle program for the first open source SkyWater PDK will mean that more talent, including teams with a software background, can engage in building hardware, to the benefit of the latter. + +Through availability of open source tools and interoperability standards, complexity will be reduced by breaking systems apart into components. This model will help to solve issues more effectively by tackling them independently and collaboratively. 
diff --git a/content/blog/were-off-and-running-openpower-foundation-in-2019.md b/content/blog/were-off-and-running-openpower-foundation-in-2019.md new file mode 100644 index 0000000..b3649e8 --- /dev/null +++ b/content/blog/were-off-and-running-openpower-foundation-in-2019.md @@ -0,0 +1,24 @@ +--- +title: "We’re Off and Running – OpenPOWER Foundation in 2019" +date: "2019-03-07" +categories: + - "blogs" +--- + +By Hugh Blemings, Executive Director, OpenPOWER Foundation + +We're barely into March and already 2019 is shaping up to be an amazing year for OpenPOWER. + +The list of events we’ve taken part in around the world in just two short months is long. Multiple sessions ([including a lightning talk by yours truly](https://youtu.be/sMjRuqCNZe4?t=2360)) at [linux.conf.au](https://2019.linux.conf.au/) in Christchurch, New Zealand and AI Workshops in Mangalore, Barcelona and Tokyo. + +Two more events in India (Bangalore and Chennai) before a trip to San Francisco for IBM Think where we had a booth showing off developer systems from [Raptor](https://www.raptorcs.com/). As an aside, Raptor's entry level motherboard, the [Blackbird](https://raptorcs.com/BB/) even got a bit of [Instagram attention](https://www.instagram.com/p/Bt4fbSVjtfY/?utm_source=ig_web_button_share_sheet) from Linus Sebastian of Linus Tech Tips. At Think, OpenPOWER President Michelle Rankin and I also had the opportunity to do a presentation introducing OpenPOWER to audiences. + +And this month we’re not slowing down. You'll see OpenPOWER booths at both the [Southern California Linux Expo (SCaLEx 17](https://www.socallinuxexpo.org/scale/17x)) and the [Open Compute Project Global Summit](https://www.opencompute.org/summit/global-summit) in Pasadena and San Jose, Calif., respectively, as well as workshops from Sweden to Singapore and from Vermont to North Carolina. More details can be [found here](https://openpowerfoundation.org/events/). + +Our other big news is a major revamp of the [OpenPOWER Foundation website](https://openpowerfoundation.org/). The new site includes a Forums area where members of the OpenPOWER ecosystem can collaborate on everything from deep technical matters to end user solutions. Our thanks to [Scot Schultz](https://www.linkedin.com/in/scotschultz/) for his tireless efforts in leading the website revamp. + +While we’re certainly keeping busy in the first half of the year with events, we’re also really excited about what the second half of the year will bring when we host our U.S., Europe and Asia Summits. We’re well underway in planning these signature events, check our social channels for updates. + +As you can see, we’ve had a busy first two months of 2019, and we can’t wait to keep the momentum going with our members for the entire year! + +P.S. Keep an eye on the [Forums](https://openpowerfoundation.org/groups/) on the OpenPOWER site next week – might even be some prizes on offer… diff --git a/content/blog/what-does-open-mean-to-you.md b/content/blog/what-does-open-mean-to-you.md new file mode 100644 index 0000000..8582fa0 --- /dev/null +++ b/content/blog/what-does-open-mean-to-you.md @@ -0,0 +1,28 @@ +--- +title: "Video: What Does \"Open\" Mean to You?" +date: "2015-12-17" +categories: + - "videos" + - "blogs" +tags: + - "openpower" + - "video" +--- + +_By OpenPOWER Foundation_ + +Last month the OpenPOWER Foundation was in full force at Supercomputing 2015 in Austin, TX. We had a great time networking with other revolutionaries who are embracing open hardware to revolutionize the data center. 
We decided to meet with some OpenPOWER members and ask them a simple question: "What does 'open' mean to you?" These are their answers. + +https://www.youtube.com/watch?v=mZYWTg5-qfg&feature=youtu.be + +## Read more about how the OpenPOWER Foundation is leading the open hardware revolution + +- ### [Get to know the people behind the technology with People of OpenPOWER](https://openpowerfoundation.org/newsevents/people-of-openpower/) + +- ### [Video: IBM and OpenPOWER Partner with Oak Ridge National Labs to Solve World’s Toughest Challenges](https://openpowerfoundation.org/videos/video-ibm-and-openpower-partner-with-oak-ridge-national-labs-to-solve-worlds-toughest-challenges/) + +- ### [Workshop Recap: OpenPOWER Personalized Medicine Working Group](https://openpowerfoundation.org/blogs/workshop-recap-openpower-personalized-medicine-working-group/) + +- ### [NEC’s Service Acceleration Platform for Power Systems Accelerates and Scales Cloud Data Centers](https://openpowerfoundation.org/blogs/nec-acceleration-for-power/ "Permalink to NEC’s Service Acceleration Platform for Power Systems Accelerates and Scales Cloud Data Centers") + +- ### [Rackspace, OpenPOWER & Open Compute: Full Speed Ahead with Barreleye](https://openpowerfoundation.org/blogs/openpower-open-compute-rackspace-barreleye/) diff --git a/content/blog/why-openpower-why-now.md b/content/blog/why-openpower-why-now.md new file mode 100644 index 0000000..4114206 --- /dev/null +++ b/content/blog/why-openpower-why-now.md @@ -0,0 +1,73 @@ +--- +title: "Driving Open Collaboration in the Datacenter" +date: "2014-12-23" +categories: + - "blogs" +tags: + - "featured" +--- + +### _Rapid Growth of the OpenPOWER Foundation Reflects the Need for IT Collaboration and Innovation that Extends Down to the Chip_ + +By Calista Redmond, Director, OpenPOWER Global Alliances, IBM + +The computer industry is going through radical change, triggered by increasing workloads and decreasing chip performance gains, and OpenPOWER is innovating to meet the challenge. + +**In August 2013, IBM,** Google, Mellanox, NVIDIA and Tyan [announced plans](file:///C:\Users\sofia.barbieri\AppData\Local\Microsoft\Windows\Temporary%20Internet%20Files\Content.Outlook\S2LHBMSM\announced%20plans) to form OpenPOWER. [The OpenPOWER Foundation](https://openpowerfoundation.org/about-us/) was incorporated as a legal entity in December 2013. The last twelve months have brought us rapid membership growth across all layers of the stack – from chip to end users – and OpenPOWER members are already innovating and bringing offerings to market. + +As an open, not-for-profit technical membership group, the Foundation makes POWER hardware and software available for open development, as well as POWER intellectual property licensable to other manufacturers. The result is an open ecosystem, using the POWER Architecture to share expertise, investment, and server-class intellectual property to address the evolving needs of customers and industry. + +### Why OpenPOWER? Why Now? + +To understand why the industry is transforming so quickly, it’s important to recognize the industry forces that brought us here. There are a number of developments that have become clear and that inspired this new strategic shift for IBM and for the industry: + +1. **Silicon is not enough**. Moore’s Law predictions of performance improvements with each new generation of silicon have hit a physics wall and are no longer satisfying the price/performance ratios that clients and end users are looking for. +2. 
**Different and growing workload demands**. There is a tsunami of data flooding into organizations. In order to effectively manage the volume, address governance requirements, and get more value from data through analytics, data centers need to make adjustments to optimize for the new workload demands. This evolution is true today and will continue to change in the future. It is no longer satisfactory to take an all-purpose machine and deploy it for every workload. More specialization is required. +3. **Changing consumption model of IT**. The consumption model for many end users has become the cloud. Increasingly, users want to pay as they go and turn their IT services on and off like a utility. That has also led to cloud providers facing the need to specialize the hardware they deploy in their own data centers in order to effectively support this increasingly popular consumption model. Very large internet data centers and cloud service providers want to build their own, optimizing on price performance. +4. **The continued momentum and maturity of the open source software ecosystem**. Open source software has taken off. It has become a very mature ecosystem delivering at enterprise class and growing stronger every day. There is more and more reliance on the open software model. + +These four trends led IBM to reflect on its own strategy. To address new challenges, IBM needed to lead the industry change. Today, the OpenPOWER Foundation is addressing that need by becoming the catalyst for open innovation that is necessary throughout the entire stack, from chip through software. + +### Innovation and Customization Down to the Chip + +With the OpenPOWER Foundation, open development spanning software, firmware and hardware is the catalyst for change. + +The OpenPOWER Foundation acts as an enabler in the industry, bringing together thought leaders across multiple parts of the IT stack to innovate together. Rather than doing innovations one at a time – one partner at a time – organizations can do them in workgroups with multiple thought leaders and experts interacting together. This means innovation can be attained at multiple levels simultaneously so that there is much greater potential of beating the price/performance curve. The result is that we are creating an optimized software ecosystem, leveraging little endian Linux so software ports easily from x86 systems. + +Within the POWER chip, IBM has implemented CAPI (Coherent Accelerator Processor Interface), a capability that allows co-processors to attach directly to the POWER processor, making it easier and faster to offload tasks to specialized processors. CAPI enables systems designers to customize their systems specifically for their own workloads and user demands. By opening up the software and the hardware, right down to the chip, the Foundation is providing a forum for innovation – and making the results of that innovation broadly available. + +This is creating an optimized software ecosystem that is enabling a spectrum of Power servers in the market today. Today we have OpenPOWER members designing 12 specific versions of POWER systems around the world. This is merely the beginning of the proliferation of POWER systems we expect to see from OpenPOWER. + +Think of the OpenPOWER model as a buffet-style approach where organizations can pick and choose what is going to work absolutely best for their particular workload. Essential elements may include memory, I/O, or acceleration, each with multiple options.
+ +### Addressing the Emerging TCO Challenge + +When we go out and talk to clients – and at IBM we are talking to end users every day – we used to have a total cost of ownership discussion that fit on one screen of a laptop. There were about six dials that they wanted to tune for their particular data center. Today, that TCO analysis is often many pages. There are many variables that organizations would like to fine-tune for the specific workloads but yet they also have a strong desire to simplify and to maximize their investment in the right number of configurations for their data center. + +Through the OpenPOWER Foundation, organizations are able to customize how they consume technology by making adjustments based on the POWER Architecture. There are other architecture options out there, but ours is the most open and the most mature for the enterprise data center. Delivering open choice, riveting performance, and competitive TCO pricing strengthen the long term value proposition our end users are realizing. + +### POWER Architecture Momentum + +December 2014 is the first anniversary of the incorporation of the OpenPOWER Foundation. + +We have worked very hard to get solutions and hardware reference boards available for the [public launch](https://openpowerfoundation.org/press-releases/openpower-foundation-unveils-first-innovations-and-roadmap/) which was announced in April 2014. By then, we had more than two dozen members. In July, we had contributed the [POWER8](file:///C:\Users\Joyce\AppData\Local\Microsoft\Windows\Temporary%20Internet%20Files\Content.Outlook\TCRGNKF6\BM’s%20new%20POWER8%20processor) firmware to open source, providing a significant signal to the market that we are very serious about enabling innovation and optimization going all the way down to the hardware level. Today, we count more than 80 OpenPOWER members. We are growing globally and now have more than a dozen members in Europe and over 20 members in Asia. + +Our members’ involvement is spread across the stack from the chip level with hardware optimizations of I/O and memory, and acceleration options, and we are growing now into software. [OVH](https://www-03.ibm.com/press/uk/en/pressrelease/45178.wss), a leading internet hosting provider based in France, has just launched an on-demand cloud service based on the IBM POWER8 processor, tuned specifically for big data, high performance computing, and database workloads. In the US, Rackspace just announced their intentions to fuse the best of OpenPOWER, Open Compute, and OpenStack to drive an ultimately open data center design for cloud providers and scale out data centers. + +We are also continuing to have conversations with nations that are interested in furthering their own unique domestic IT agenda as well as with large internet data centers that are moving very quickly into proof-of-concept stage with specific design points that they would like to hit for their data centers. + +Some of the key milestones the OpenPOWER Foundation has made possible include: + +- The introduction of the [IBM Data Engine for NoSQL](http://www.smartercomputingblog.com/tag/ibm-data-engine-for-nosql/) - Power Systems Edition, which features the IBM FlashSystem, and is the first solution to take advantage of CAPI, and speeds input/output and enables massive server consolidation. 
+- The launch of the [Power System S824L](http://www.theinquirer.net/inquirer/news/2373830/ibm-teams-with-nvidia-to-launch-power-systems-server-based-on-openpower-foundation), which leverages OpenPOWER Foundation technology to accelerate Java, big data and technical computing applications. Here, you see an 8x faster performance on analytics workloads and that is leveraging OpenPOWER innovations together with NVIDIA, which does GPU acceleration. +- The availability of the first non-IBM Power System now available from [TYAN](http://www.tyan.com/campaign/openpower/), a white box provider in Taiwan. +- Collaboration across [Jülich, NVIDIA, and IBM on a supercomputing center in Europe](http://www.hpcwire.com/2014/11/10/julich-tag-teams-ibm-nvidia-data-centric-computing/) +- Endorsement by the U.S. Department of Energy on the next generation supercomputing with a $325M contract award to OpenPOWER members +- [Launch of a CAPI with FPGA Acceleration developers kit](http://www.electronicsweekly.com/news/components/programmable-logic-and-asic/fpga-makes-supercomputer-run-faster-2014-11/) together with Altera and Nallatech +- Contribution of OCC firmware code for acceleration and energy management + +We now have six different workgroups spread across the software and hardware layers, as well as in the area of compliance, which are making progress on deliverables. We also have another five workgroups that are in proposal stages. And, we are continuing to expand our client deployments. + +We understand that it is no longer possible to accomplish what is needed at the software layer alone. What is needed is an open innovation model that goes all the way down to the chip. This is a mission no single company can or should drive alone. While we’re impressed with the momentum of this year, the strategy we’re on is taking root within the industry as thought leaders across the growing OpenPOWER community join in driving a new path forward. + +Happy first birthday OpenPOWER! diff --git a/content/blog/wistron-demonstrates-how-to-set-up-ibm-powerai-on-mihawk.md b/content/blog/wistron-demonstrates-how-to-set-up-ibm-powerai-on-mihawk.md new file mode 100644 index 0000000..f2413f1 --- /dev/null +++ b/content/blog/wistron-demonstrates-how-to-set-up-ibm-powerai-on-mihawk.md @@ -0,0 +1,127 @@ +--- +title: "Wistron Demonstrates How to Set Up IBM PowerAI on Mihawk" +date: "2019-04-30" +categories: + - "blogs" +tags: + - "featured" +--- + +By Wistron Corporation + +Applications like AI (Artificial Intelligence) and ML (Machine Learning) have been growing quickly in recent years. Developers need not only powerful systems to accelerate development progress but also a friendly development environment that can easily jumpstart the process. + +[Wistron POWER9 Mihawk](https://openpowerfoundation.org/?resource_lib=wistron-corp-p93d2-2p-mihawk) supports PCIe Gen4, which has twice the bandwidth of PCIe Gen3, and up to 10 PCIe slots, which is more flexible for users to install various devices such as GPU or FPGA for AI/ML/DL purposes. Please see [here](https://openpowerfoundation.org/?resource_lib=wistron-corp-p93d2-2p-mihawk) for more system information. + +IBM has vast experience and a wide technology presence in the AI domain. PowerAI is the key product and is designed for enterprises to start using AI technology more easily. IBM also delivers PowerAI in Docker, which reaps the benefits of containers that can deploy PowerAI to multiple servers faster and easier. 
+ +Here we demonstrate how to set up an IBM PowerAI Docker image on Mihawk. + +We use Ubuntu 18.04.1 with the default kernel version as the host OS. The other required software components for PowerAI are: + +- NVIDIA CUDA : 10.1 +- NVIDIA cuDNN : 7.5 +- NVIDIA NCCL : 2.4.2 +- Conda Package : 4.5.12 +- Docker-CE : 18.06.1~ce~3.0~ubuntu +- NVIDIA Docker : 2.0 + +For the NVIDIA components, the CUDA package can be found [here](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=ppc64le&target_distro=Ubuntu&target_version=1804&target_type=deblocal), cuDNN is [here](https://developer.nvidia.com/rdp/cudnn-download) and NCCL is [here](https://developer.nvidia.com/nccl/nccl-download). These NVIDIA components are delivered via the conda channel and do not need to be downloaded and installed manually. The driver can be downloaded [here](https://www.nvidia.com/Download/index.aspx?lang=en-us). + +The Conda package can be found [here](https://repo.anaconda.com/archive/). Using Anaconda2 or Anaconda3 is acceptable. [IBM’s website](https://www.ibm.com/support/knowledgecenter/SS5SF7_1.6.0/navigation/pai_setupAnaconda.html) shows more details about how to install and set up Anaconda on a PowerPC system. We installed “Anaconda3-2019.03-Linux-ppc64le.sh” for this demonstration. + +To install Docker-CE, here is a [website](https://docs.docker.com/install/linux/docker-ce/ubuntu/) that describes the installation in more detail. + +For the PowerPC platform, there is no “docker-ce-cli” in the repository, so this software package can be ignored. The Docker-CE version for NVIDIA Docker 2.0 in Ubuntu 18.04 is **18.06.1~ce~3-0~ubuntu**. The available versions can be shown with the command: + +
$ apt-cache madison docker-ce
+ +_Figure 1: List available versions of Docker-CE_ + +![](images/Figure-1.png) + +So we need to specify the version when installing Docker-CE, as in the command below: + +
$ sudo apt install docker-ce=18.06.1~ce~3-0~ubuntu containerd.io
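# Optional step (an assumption, not from the original post): hold the package
# so a later "apt upgrade" does not move Docker-CE past the release supported
# by NVIDIA Docker 2.0.
$ sudo apt-mark hold docker-ce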
+ +For NVIDIA Docker 2.0, the repository configuration can be found [here](https://nvidia.github.io/nvidia-docker/). Then install nvidia-docker2 and reload the Docker daemon configuration using the following commands: + +
$ sudo apt install nvidia-docker2
$ sudo pkill -SIGHUP dockerd
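# Optional check (not in the original post): confirm the "nvidia" runtime is
# registered with the Docker daemon before pulling the PowerAI image.
$ docker info | grep -i runtime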
+ +After the requirements are installed successfully, we can start to download a PowerAI docker image. Here we download the latest version with all frameworks installed as an example. + +
$ docker pull ibmcom/powerai:1.6.0-all-ubuntu18.04-py3-ppc64le_cloud
+ +_Figure 2: Screenshot of PowerAI image downloaded complete_ + +![](images/Figure-2.png) + +To start the PowerAI container: + +
$ docker run --runtime=nvidia --env LICENSE=yes --env ACTIVATE=all -t -i ibmcom/powerai:1.6.0-all-ubuntu18.04-py3-ppc64le_cloud /bin/bash
(Note: Use --runtime=nvidia to make sure the Docker container can use the NVIDIA GPU modules and NVIDIA software packages from the local host.)
+ +Or + +
$ nvidia-docker run --env LICENSE=yes --env ACTIVATE=all -t -i ibmcom/powerai:1.6.0-all-ubuntu18.04-py3-ppc64le_cloud /bin/bash
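# Optional sanity check (illustrative, not from the original post): inside the
# started container, confirm the GPUs are visible; with ACTIVATE=all the
# PowerAI frameworks should also be importable from the active environment.
$ nvidia-smi
$ python -c "import tensorflow as tf; print(tf.__version__)"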
+ +_Figure 3: Screenshot of some frameworks in PowerAI_ + +![](images/Figure-3.png) + +With the IBM PowerAI solution, we can easily set up a development environment for AI applications on our P9 Mihawk. Users can focus more on their application developments with no need to worry about software dependency problems. + +If you would like to know more information, please contact us at [mike\_liao@wistron.com](mailto:mike_liao@wistron.com) or [phoebe\_li@wistron.com](mailto:phoebe_li@wistron.com). + +#### **About Wistron** + +![](images/Summit-China-5.jpg)As a long-standing partner with IBM, Wistron utilizes more than 10 years PowerPC design and manufacture experience to offer robust services across diverse technical platforms. Wistron provides tailored, flexible business models from barebones to rack integration delivery to meet various business needs. diff --git a/content/blog/wistron-demonstrates-pcie-gen4-power9.md b/content/blog/wistron-demonstrates-pcie-gen4-power9.md new file mode 100644 index 0000000..43c8164 --- /dev/null +++ b/content/blog/wistron-demonstrates-pcie-gen4-power9.md @@ -0,0 +1,66 @@ +--- +title: "Wistron Demonstrates PCIe Gen4 on Power9" +date: "2018-06-20" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Wistron Corporation_ + +[![Wistron P9 products with PCIe Gen4 ](images/Wistron-1-300x145.png)](https://openpowerfoundation.org/wp-content/uploads/2018/06/Wistron-1.png)[![Wistron P9 products with PCIe Gen4 ](images/Wistron-2-300x160.png)](https://openpowerfoundation.org/wp-content/uploads/2018/06/Wistron-2.png) + +Wistron P9 products with PCIe Gen4 + +For today’s complex HPC, Enterprise, and Data-Center workloads, the need for high-speed I/O is paramount – which is why PCIe Gen4 is one of the main features of the Wistron POWER9 product portfolio. To demonstrate the impact of PCIe Gen4 on system performance, we compared it to PCIe Gen3 performance on both POWER9 and x86 systems. + +## Mellanox ConnectX-5 100G/EDR dual-port InfiniBand Adapter + +First, we needed to have an add-in card which supports both PCIe Gen4 and Gen3. Considering the driver readiness on both x86 and OpenPOWER platform, we selected the Mellanox ConnectX-5 (Figure 1) for the test. The theoretical bandwidth is shown below in Table 1: + +
| | PCIe Gen4 x16 | ConnectX-5 100G dual-port | PCIe Gen3 x16 |
| --- | --- | --- | --- |
| Formula | 16Gb/s * 16 | 100Gb/s * 2 | 8Gb/s * 16 |
| Bandwidth | 256 Gb/s | 200Gb/s | 128 Gb/s |
+ +Table 1. Theoretical bandwidth + +[![Mellanox ConnectX-5 100G/EDR dual-port InfiniBand Adapter](images/Mellanox-300x225.png)](https://openpowerfoundation.org/wp-content/uploads/2018/06/Mellanox.png) + +Figure 1. Mellanox ConnectX-5 100G/EDR dual-port InfiniBand Adapter + +## Hardware Setup + +We have OpenPOWER P9 and x86 systems, and we mounted a ConnectX-5 card on the target PCIe slots of all systems and connected the EDR ports with Mellanox 100G cables individually. The detailed configuration of our test is shown below in Table 2. + +
| System | Wistron P91D2-2P-48 * 2 | Sugon I620-G30 |
| --- | --- | --- |
| CPU | P9 Sforza 20core (160W) * 2 | Intel 8153 16core (125W) * 2 |
| Memory | 256GB | 256GB |
| IB adaptor | Mellanox ConnectX-5 | Mellanox ConnectX-5 |
| OS | RHEL 7.5 | RHEL 7.5 |
| OFED | MLNX_OFED_LINUX-4.3-3.0.2.1 | MLNX_OFED_LINUX-4.3-3.0.2.1 |
+ +Table 2. Configuration between OpenPOWER P9 and x86 systems + +## Bandwidth Average Result + +After installing OFED® successfully, its inbox commands are available under the OS. We executed the “ib\_write\_bw” command to check the average I/O bandwidth of each link at the same time and summarized the results. To achieve the upper limit of clients, we used P9 Gen4 as a server to connect different clients. The test result is shown below in Figure 2: + +[![Bandwidth Average of P9 Gen4, P9 Gen3 and x86 Gen3](images/Figure-2.png)](https://openpowerfoundation.org/wp-content/uploads/2018/06/Figure-2.png) + +Figure 2. Bandwidth Average of P9 Gen4, P9 Gen3 and x86 Gen3 + +The I/O bandwidth results meet our expectations. When we connect both ports from the P9 Gen4 slot, it reaches 96.6% of the theoretical bandwidth of PCIe Gen4. And when we use P9 Gen4 as a server and connect to PCIe Gen3 ports on P9 and x86 platforms, P9 still has a better performance - around 10% higher than the x86 platform. + +## Latency Result + +In the latency portion of our test, considering most users are still using x86 Gen3 as the client, we set up different servers and re-ran the same test on a single link with another command, “ib\_write\_lat.” The resulting 2-byte message latency is shown below in Figure 3: + +[![Latency Result of P9 Gen4 and x86 Gen3](images/Figure-3.png)](https://openpowerfoundation.org/wp-content/uploads/2018/06/Figure-3.png) + +Figure 3. Latency Result of P9 Gen4 and x86 Gen3 + +## Conclusion + +In this test, we set out to give users a picture of how PCIe Gen4 improves performance using a real device on a real system instead of using theoretical calculations. Although there is no significant latency advantage with P9 Gen4, it provides superior performance in overall bandwidth. By nearly doubling bandwidth, users will have a better ROI and a lower TCO by utilizing a single high-speed Gen4-capable network adapter instead of two Gen3 adapters in each system. + +For more information, please contact: [EBG\_sales@wistron.com](mailto:EBG_sales@wistron.com) + +## About Wistron + +[![Wistron](images/Wistron-logo-300x101.jpg)](https://openpowerfoundation.org/wp-content/uploads/2018/06/Wistron-logo.jpg) + +As a long-standing partner with IBM, Wistron utilizes more than 10 years PowerPC design and manufacture experience to offer robust services across diverse technical platforms. Wistron provides tailored, flexible business models from barebones to rack integration delivery to meet various business needs. diff --git a/content/blog/wistron-greatly-reduces-model-training-time-with-openpower.md b/content/blog/wistron-greatly-reduces-model-training-time-with-openpower.md new file mode 100644 index 0000000..3c43f2d --- /dev/null +++ b/content/blog/wistron-greatly-reduces-model-training-time-with-openpower.md @@ -0,0 +1,64 @@ +--- +title: "Wistron Greatly Reduces Model Training Time With OpenPOWER" +date: "2019-03-28" +categories: + - "blogs" +--- + +By Wistron Corporation + +In the past few years, the computing capability of servers has been greatly enhanced by the amazing progress of NVIDIA GPUs, which has created a fervor for Artificial Intelligence, Machine Learning, and Deep Learning. Wistron Value Creation Center (VCC), a leading-edge engineering innovation division, assigned numerous engineers to bring various industry-changing products and solutions to the marketplace. However, due to the very limited GPU system resources in the engineering lab, most ideas never reached full project status.
Model training consumes the most time during product development; therefore, more powerful hardware is needed to reduce the training time so more concepts can be realized. For this reason, VCC looked for support from the OpenPOWER solution team. + +In one case, the project idea was to build a patient-care assistant which can detect changes to a patient’s status to help caregivers immediately provide the proper response. To do this, the system must be taught to differentiate the meaning of different actions and whether the action was made by a patient or others. To achieve this goal, the training algorithm will include object detection and face recognition, both requiring extensive GPU computing resources. + +![](images/wistron-3.png) + +![](images/wistron-2.png) + +![](images/wistron-1.png) + +Figure 1. Example of patient status detection (Right and middle) and face recognition (Left) + +Before the project began, the OpenPOWER solution team summarized the request from VCC: 1) more GPU memory size, 2) high bandwidth and coherency between GPU-CPU and GPU-GPU, and 3) support for simultaneous multi-user model training. Finally, they chose Polaris Plus— a dual socket 2U Power8 server with four NVIDIA SXM2 P100 GPUs— to fulfill all VCC requirements. First of all, Polaris Plus has up to 64GB of GPU memory, with four GPUs of 16 GB each. Second, Polaris Plus supports NVLink technology with coherence between the Power8 CPU and the NVIDIA SXM2 P100 GPUs, which provides 40 GB/s of bandwidth on every GPU-CPU and GPU-GPU link. And last, because Polaris Plus has four physically independent GPUs, the system is able to serve four users training models at the same time. + +![](images/wistron-4.jpg) + +Figure 2. Power8 Polaris Plus + +![](images/wistron-5.jpg) + +According to VCC’s data, their original equipment—one GTX 1080 GPU installed on an x86 workstation—required approximately one month to train a big model with enough accuracy for commercial purposes. After using the OpenPOWER solution, only around 3.5 days were required for the same job. The result is not only huge time savings on training, but also on optimization. The project is still ongoing, but these initial results are very promising. + +If you are interested in GPU applications on OpenPOWER, please take a look at our latest Power9 product— [MiHawk](https://openpowerfoundation.org/?resource_lib=wistron-corp-p93d2-2p-mihawk). We also have another revolutionary product coming soon, which will enable users to expand their servers with up to 16 PCIe GPUs! + +If you would like to know more information, please contact us at [mike\_liao@wistron.com](mailto:mike_liao@wistron.com) or [phoebe\_li@wistron.com](mailto:phoebe_li@wistron.com). + +#### **About Wistron** + +As a long-standing partner with IBM, Wistron utilizes more than 10 years PowerPC design and manufacture experience to offer robust services across diverse technical platforms. Wistron provides tailored, flexible business models from barebones to rack integration delivery to meet various business needs.
diff --git a/content/blog/wistron-introduces-new-concepts-and-demonstrates-mihawk-results-at-openpower-china-summit-2018.md b/content/blog/wistron-introduces-new-concepts-and-demonstrates-mihawk-results-at-openpower-china-summit-2018.md new file mode 100644 index 0000000..ad17031 --- /dev/null +++ b/content/blog/wistron-introduces-new-concepts-and-demonstrates-mihawk-results-at-openpower-china-summit-2018.md @@ -0,0 +1,41 @@ +--- +title: "Wistron Introduces New Concepts and Demonstrates MiHawk Results at OpenPOWER China Summit 2018" +date: "2019-01-23" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Wistron Corporation_ + +On December 12, when E-commerce companies were promoting their offerings during a key shopping period, a special event which supports industry progress was held in Beijing: [OpenPOWER China Summit 2018](https://openpowerfoundation.org/openpower-china-summit-2018/). Participants, including Wistron, enjoyed the great honor to join such a big event to share their achievements on OpenPOWER in the past year and explore new technologies and solutions from other companies. + +[![](images/OpenPOWER-Summit-China-Header.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2019/01/OpenPOWER-Summit-China-Header.png) + +As a [Gold Level member](https://openpowerfoundation.org/membership/) of OpenPOWER Foundation, Wistron has more than 16 years’ experience on PowerPC development. Utilizing Wistron’s server technology and IBM’s great support, we announced our first P9 OpenPOWER product—MiHawk—at OpenPOWER China Summit 2017, which is 100% designed and manufactured by Wistron. + +\[caption id="attachment\_6071" align="aligncenter" width="343"\]![](images/Summit-China-1.jpg) Wistron Deputy General Manager: Kang Pao\[/caption\] + +This year, we brought the MiHawk back to Beijing with two brand new concepts: NVMe and Turbo configurations. + +For the NVMe configuration, we developed a customized NVMe HBA for U.2 drives. With this adaptor and the advantage of PCIe lanes using the P9 LaGrange processor, MiHawk can install up to twenty-four NVMe U.2 drives and could reach up to 80GB/s on optimal IO bandwidth. For the Turbo configuration, we also made a customized riser card for NVIDIA SXM2 V100, which supports NVLink that has 3X+ speed up from the PCIe version. The riser can also support up to four NVMe U.2 drives. With the customized riser card and the NVMe HBA, Mihawk can install up to two NVIDIA SXM2 V100 and up to sixteen NVMe U.2 drives.  + +\[caption id="attachment\_6072" align="aligncenter" width="413"\]![](images/Summir-China-2.png) Wistron POWER9 server: MiHawk\[/caption\] + +Here is some data obtained from our previous demonstration on MiHawk: + +- IBM POWER9 LaGrange CPU has a 30% performance advantage over Intel Xeon Gold series CPU. +- With PCIe Gen4 support, the network throughput has more than a 75% advantage over PCIe Gen3. + +\[caption id="attachment\_6073" align="aligncenter" width="467"\]![](images/Summit-China-3.jpg) Wistron Technical Manager: Nathan Hsu\[/caption\] + +We are glad that our partner [Rambus](https://www.rambus.com/) also came to China to introduce their product—Proteus—at the event this year. Proteus is a customized platform for hybrid memory research, which can provide as low latency as DRAM with much lower cost. Proteus also supports OpenCAPI to perform data transmission, which has more than a 35% bandwidth advantage over PCIe Gen3 per connector. 
+ +\[caption id="attachment\_6074" align="aligncenter" width="510"\]![](images/Summit-China-4.jpg) Rambus Sr. Director Kenneth Wright\[/caption\] + +We are very proud that our Mihawk is able to help Rambus in their hybrid memory research, just like our beneficial collaborating experience with CS2C, Redflag and Redoop. And Wistron will keep striving to be a leading hardware technology provider in the OpenPOWER ecosystem. + +**About Wistron** + +As a long-standing partner with IBM, Wistron utilizes more than 10 years PowerPC design and manufacture experience to offer robust services across diverse technical platforms. Wistron provides tailored, flexible business models from barebones to rack integration delivery to meet various business needs. diff --git a/content/blog/wistron-openfoam-motorbike.md b/content/blog/wistron-openfoam-motorbike.md new file mode 100644 index 0000000..764a093 --- /dev/null +++ b/content/blog/wistron-openfoam-motorbike.md @@ -0,0 +1,36 @@ +--- +title: "Wistron Demonstrates OpenFOAM MotorBike Benchmark Results on MiHawk" +date: "2018-09-26" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Wistron Corporation_ + +[OpenFOAM](https://www.openfoam.com/)® (Open Field Operation and Manipulation) is one of the most famous open source packages to solve complex fluid flows which involve chemical reactions, turbulence and heat transfer, acoustics, solid mechanics and electromagnetics. To evaluate how the Power9 system could improve OpenFOAM computing, we used the [MiHawk](https://openpowerfoundation.org/wp-content/uploads/2018/06/MiHawk-down.pdf)\--2U2S Power9 system to run the benchmark. We found that it only takes about 2/3 of runtime to finish the same job on MiHawk, meaning about a 50% performance improvement compared to a system with dual Intel Xeon Gold 6148 processors. + +We couldn’t wait to share the results with OpenPOWER Foundation members. + +## Test Environment Setup + +In this demonstration, we used MotorBike with 2M elements. The benchmark showed how long it takes to complete 100 iterations. The configuration of the test environment is seen in Table 1 below. + +\[caption id="attachment\_5760" align="aligncenter" width="623"\][![](images/Table-1-MotorBike-1.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/09/Table-1-MotorBike-1.jpg) Table 1. Test configuration with MotorBike\[/caption\] + +## Results + +We compared our results to benchmarks achieved by other processors [found here](http://openfoamwiki.net/index.php/Benchmarks). The summary is seen below in Figure 1. + +\[caption id="attachment\_5762" align="aligncenter" width="658"\][![](images/Figure-1-MotorBike-1.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/09/Figure-1-MotorBike-1.png) Figure 1. OpenFOAM Benchmarks summary results\[/caption\] + +## Conclusion + +With higher bandwidth between CPU and memory in Power9, there is an advantage for OpenFOAM computing on Power9 over other CPU models. Furthermore, this was a single node demonstration. The performance would be further optimized by using an IBM XL-compiler and SpectrumMPI for multi-nodes case. + +For more information, please contact: [EBG\_sales@wistron.com](mailto:EBG_sales@wistron.com) + +## About Wistron + +[![Wistron](images/Wistron-logo-300x101.jpg)](https://openpowerfoundation.org/wp-content/uploads/2018/06/Wistron-logo.jpg)As a long-standing partner with IBM, Wistron utilizes more than 10 years PowerPC design and manufacture experience to offer robust services across diverse technical platforms. 
Wistron provides tailored, flexible business models from barebones to rack integration delivery to meet various business needs. diff --git a/content/blog/wistron-supports-cs2c-linux-os-power9.md b/content/blog/wistron-supports-cs2c-linux-os-power9.md new file mode 100644 index 0000000..53f5526 --- /dev/null +++ b/content/blog/wistron-supports-cs2c-linux-os-power9.md @@ -0,0 +1,30 @@ +--- +title: "Wistron supports CS2C to develop new OpenPOWER Ready Linux OS for Power9" +date: "2018-08-08" +categories: + - "blogs" +tags: + - "featured" +--- + +By Wistron Corporation + +\[caption id="attachment\_5650" align="aligncenter" width="526"\][![](images/Mihawk.png)](https://openpowerfoundation.org/wp-content/uploads/2018/08/Mihawk.png) Wistron Mihawk\[/caption\] + +This June, CS2C (China Standard Software Co., Ltd.) announced that their new product, [NKAS V7U2](https://openpowerfoundation.org/?resource_lib=china-standard-software-co-ltd-neokylin-linux-advanced-server%EF%BC%8Cneokylin-virtualization-manager%EF%BC%8Cneokylin-ha-cluster-software%EF%BC%8Cneokylin-load-balance-software%EF%BC%8Cneokylin) (Neokylin Linux Advanced Server Operating System V7 Update2), supports the Power9 server and is certified as OpenPOWER Ready. It is not only an achievement of a team or company, but also a great demonstration of the strong connection between members in the OpenPOWER Foundation ecosystem. + +CS2C already had good experience and a successful product for Power8, but wanted to verify their design and solution for the next-generation platform. However, it was not easy for a software company to identify a Power9 hardware partner. IBM stepped in and became the bridge between CS2C and Wistron. + +As a hardware provider and OpenPOWER Foundation Gold member, Wistron has a responsibility to support other participants, especially software partners that need to deploy and optimize their products on a real system during development. We decided to provide our [Mihawk – 2U2S POWER9](https://openpowerfoundation.org/?resource_lib=wistron-corp-p93d2-2p-mihawk) solution for CS2C to develop their product. Mihawk is also certified as OpenPOWER Ready and delivers strong performance with this design (see the specification in Table 1). + +We are very proud of our partner’s success, and we hope to help more customers like them in the future. + +\[caption id="attachment\_5649" align="aligncenter" width="982"\][![](images/Table-1.png)](https://openpowerfoundation.org/wp-content/uploads/2018/08/Table-1.png) Table 1. Wistron Mihawk specification\[/caption\] + +For more information, please contact: [EBG\_sales@wistron.com](mailto:EBG_sales@wistron.com). + +## About Wistron + +![Wistron](images/Wistron-logo-300x101.jpg) + +As a long-standing partner with IBM, Wistron utilizes more than 10 years of PowerPC design and manufacturing experience to offer robust services across diverse technical platforms. Wistron provides tailored, flexible business models from barebones to rack integration delivery to meet various business needs. 
diff --git a/content/blog/with-openpower-unicamp-shares-academic-research-across-brazil.md b/content/blog/with-openpower-unicamp-shares-academic-research-across-brazil.md new file mode 100644 index 0000000..f4ad7a6 --- /dev/null +++ b/content/blog/with-openpower-unicamp-shares-academic-research-across-brazil.md @@ -0,0 +1,73 @@ +--- +title: "With OpenPOWER, Unicamp Shares Academic Research Across Brazil" +date: "2016-06-22" +categories: + - "blogs" +tags: + - "featured" +--- + +_By Juliana Rodrigues, Student, Unicamp_ + +(This post appears as part of our Developer Series. To learn more about what developers are doing with OpenPOWER, visit the **[OpenPOWER Developer Challenge](http://bit.ly/1RUu76u)**) + +I was about four years old when I got my first computer, but it wasn't until I was 13 that I had my first experience with Linux. I didn't have a CD drive at the time, so I did a Debian Etch net-install on a dial-up connection. It only took about four hours until I lost my connection and had to start over. After a few more hours and a lot of work, when I saw the login screen I felt like I was diving into a new world. + +From there to programming was a short step. After the first months using Debian, I found out that it was possible to make small programs to automate daily tasks. The first one I wrote was a Python script that downloaded Soundcloud MP3s. At that time, I didn't have a professional interest in computer programming, but it only took me a few more years to realize that I could have a fun and interesting career with computer science. + +## University of Campinas + +For college, [Unicamp](http://www.unicamp.br/unicamp/?language=en) wasn't a hard choice. Currently, Unicamp is the country’s biggest patent holder and [QS places Unicamp 11th in its "Best Universities under 50 years" ranking](http://www.topuniversities.com/universities/universidade-estadual-de-campinas-unicamp). + +\[caption id="attachment\_3937" align="aligncenter" width="652"\]![UNICAMP](images/unicamp-campus.jpg) Unicamp\[/caption\] + +Unicamp also has a lot of research initiatives running inside its [Institute of Computing](http://www.ic.unicamp.br/en), one of the strongest in Brazil, and I couldn’t wait to leverage the Institute’s resources for my own research. In my first weeks, I met Professor Sandro Rigo and software engineer Rafael Sene, who introduced me to the OpenPOWER Foundation and the research projects they were developing in their Unicamp lab. My eyes sparkled, and after two months I was all-in. + +In 2015, Unicamp became a member of the OpenPOWER Foundation. Even before joining, we developed research in conjunction with IBM through our Linux Technology Center laboratory, located inside Unicamp. This lab now holds our OpenPOWER Lab, aiming to focus our research and development even more on the open-sourced Power architecture. + +Right now, our team consists of six people, all dedicated to advancing research on OpenPOWER. + +\[caption id="attachment\_3938" align="aligncenter" width="625"\]![From left to right: Rafael Sene, Klaus Kiwi, Juliana Rodrigues, Rodolfo Azevedo, Breno Leitão, Maurício Lorenzetti](images/unicamp-team-1024x768.jpg) From left to right: Rafael Sene, Klaus Kiwi, Juliana Rodrigues, Rodolfo Azevedo, Breno Leitão, Maurício Lorenzetti\[/caption\] + +## Sharing Research Across Brazil + +In Brazil, most of our top universities are public universities. 
This means most of our research funds come almost entirely from government programs, resulting in budget constraints for many projects. Conducting top research while keeping budgets low is a major challenge. + +The majority of Unicamp’s current Institute of Computing research covers many knowledge areas and shares results with students at other institutes. This way, our research in computer science advances knowledge in other fields as well. + +In this scenario, we created the “Minicloud”. + +## The Minicloud Project + +To figure out a way for professors, researchers and students from Unicamp and beyond to conduct their research in an economical, trustworthy and scalable way, we knew we needed to be open. The Minicloud project consists of a completely open-source platform, from the foundation with OpenPOWER architecture up to the top, with OpenStack. + +This OpenPOWER and OpenStack project started as a part of Marcelo Araujo's master's thesis to evaluate the virtualization performance of POWER8 processors and then became a major project in our lab. After months of studying, building, recompiling and adapting code, we were able to run our first demo OpenStack cloud powered by OpenPOWER. + +Currently, our laboratory and the Minicloud project are led by [Professor Rodolfo Azevedo](http://www.ic.unicamp.br/~rodolfo) and support many researchers and open source projects from around the world. The project is run entirely inside Unicamp. Our infrastructure features POWER8 and POWER7 machines, supporting up to 720 virtual machines running simultaneously. + +## ![unicamp box](images/unicamp-box-225x300.jpg) IBM University Challenge + +When we learned that we would be participating in IBM’s Innov8 with POWER8 University Challenge, we saw it as a great opportunity to show the rest of the world what we had been developing all year. Minicloud is a big project that we had envisioned for a long time, and that's what we wanted to talk about. + +More than an event, it was a prize for our lab and for me to be able to talk about our work, to meet so many interesting people and to learn about so many interesting projects. This experience has done nothing but motivate us to build and develop our projects even further and to contribute to our community. + +You couldn’t imagine how pleased we were that Unicamp and our Minicloud project took home the “Best in Show” award at IBM Interconnect! But it was just one example of the great OpenPOWER projects that were on display from other university teams. + +\[caption id="attachment\_3940" align="aligncenter" width="625"\]![The Unicamp team at IBM Interconnect](images/unicamp-best-in-show-1024x768.jpg) The Unicamp team at IBM Interconnect\[/caption\] + +## The Future + +We hope to keep improving our Minicloud and develop even bigger projects that will impact as many people as Minicloud does. We look forward to more opportunities and hope to make more significant contributions to the community. + +Beyond supporting external research with Minicloud, we conduct research and development inside our lab to build new tools and methods for optimizing open-source packages for the Power architecture. We also work with many development teams to migrate packages to Power by providing dedicated compiler machines. 
Among others, we're currently working on: + +- Performance Evaluation and Methods Investigation to accelerate seismic processing algorithms in POWER: We are working with the Petroleum Study Center (CEPETRO) to optimize their seismic algorithms, which used to run on Xeon machines, so that they run much faster on our POWER8 machines. We'll be using the Minicloud to test and develop the study. +- Optimization of FPGA use for parallel processing in virtual machines: We want to be able to add processing power to our cloud through the use of FPGAs. For that, we'll develop a solution from the ground up in order to allow our virtual machines to communicate properly with the FPGAs. +- Energy reduction through smart monitoring with OpenStack: Using energy detectors, we'll be able to adapt our consumption to our actual usage through a simple OpenStack plugin. +- Performance benchmark and comparison study of processor architectures: This project aims to build a reliable benchmark study across the best-known hardware architectures, complementing research that we developed in the past. + +If you're curious about our projects, you can visit our webpage [openpower.ic.unicamp.br](http://openpower.ic.unicamp.br/) or get in touch with one of our staff members, and if you're interested in more great projects being developed with OpenPOWER, join the [OpenPOWER Developer Challenge](http://bit.ly/1RUu76u). + +* * * + +_![foto-descontraida](images/foto-descontraida-150x150.jpg)About Juliana Rodrigues: Juliana is a computer science student at Unicamp. She works on many projects at the OpenPOWER Lab as a researcher, such as the implementation of the OpenStack platform for open source research on top of POWER8 and a continuous integration cloud for the Power architecture._ diff --git a/content/blog/workshop-recap-openpower-academic-community-shares-latest-advances.md b/content/blog/workshop-recap-openpower-academic-community-shares-latest-advances.md new file mode 100644 index 0000000..1964e87 --- /dev/null +++ b/content/blog/workshop-recap-openpower-academic-community-shares-latest-advances.md @@ -0,0 +1,45 @@ +--- +title: "Workshop Recap: OpenPOWER Academic Community Shares Latest Advances" +date: "2016-01-06" +categories: + - "blogs" +tags: + - "sc15" + - "academic" +--- + +_By Ganesan Narayanasamy (OpenPOWER Academic Discussion Group Leader)_ + +Before SC15 kicked off on November 15th, about 40 of the 170+ members of the OpenPOWER Foundation were already gathered in Austin to discuss OpenPOWER technologies for High Performance Computing (HPC) and High Performance Data Analytics applications. Ready to share and discuss their work in emerging workload optimization for OpenPOWER, members of the OpenPOWER Academic Discussion Group (ADG) took the opportunity to network, share knowledge and explore commercial collaboration opportunities at the ADG’s first annual meeting.  Presentations from the meeting are available for download [here](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp). 
+ +Participants included representatives \*from [NVIDIA](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/43553445785/1), [Jülich Supercomputing Centre](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/2/5370236193/43553419413/1), [Delft University of Technology](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/45164277089/1), [Texas Advanced Computing Center](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/44028264830/1), [Oak Ridge National Laboratory](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/45193771273/1), [A\*STAR Computational Research Centre](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/45164035393/1), the UK’s [STFC Hartree Centre,](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/45163874589/1) and IBM ([Apache Spark on Power](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/45157854529/1), [POWER8,](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/45165288453/1) [Genomics on Power](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/46018607265/1)). + +_\*direct links to presentations_ + +\[caption id="attachment\_2262" align="alignnone" width="625"\]![IMG_20151112_172029780_HDR](images/IMG_20151112_172029780_HDR-1024x576.jpg) Prof. Dirk Pleiter, Jülich Supercomputing Centre\[/caption\] + +From getting the most out of the [POWER processor’s advanced features](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/45165288453/1) and [GPU acceleration](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp/1/5370236193/43553445785/1) to [using new Big Data frameworks like Apache Spark along with FPGA acceleration](https://openpowerfoundation.org/blogs/genomics-with-apache-spark/), workshop participants shared their early work in exploiting OpenPOWER to deploy advanced system architectures capable of breakthrough performance when running the most challenging computing workloads. + +\[caption id="attachment\_2263" align="aligncenter" width="625"\]![IMG_20151112_180251479_HDR](images/IMG_20151112_180251479_HDR-1024x576.jpg) Dr. Jack Wells, Oak Ridge National Laboratory Director of Science, Oak Ridge Leadership Computing Facility\[/caption\] + +Coming together across international, disciplinary and academic boundaries, OpenPOWER ADG members united around common interests including fundamental computing challenges in HPC (parallelization, latency, bandwidth, job throughput) and use of the latest community and commercially supported software. It was a first step in launching formal collaboration around OpenPOWER across the Academic Community, who are among the first to address the newest, hardest problems in science and scientific computing. + +![group](images/group.jpg)As we begin the New Year I look forward to seeing OpenPOWER ADG members continue to shape and apply OpenPOWER technology to push the boundaries of computing ever further, together. + +# Learn More + +- Download presentations from the 1st Annual OpenPOWER ADG Workshop at SC15 in Austin [here](https://ibm.app.box.com/s/mauxelpxrnflbck4i3351co0wc6ijsyp). +- Read about the [OpenPOWER ADG’s **India** Summit 2015 held in Bangalore, India](https://www.linkedin.com/pulse/openpower-adg-summit-india-ganesan-narayanasamy-6084023981264949250?trk=prof-post). + +# Join Us! 
+ +- If interested in joining the OpenPOWER Academic Discussion Group, please [email Ganesan Narayanasamy](mailto:ganesana@in.ibm.com) +- Visit the ADG at the [OpenPOWER Summit 2016](https://openpowerfoundation.org/openpower-summit-2016/) (check back here for details coming soon) +- Visit the ADG at [ISC 2016](http://www.isc-hpc.com/home.html) in Frankfurt (check back here for details) +- Join the 2nd annual ADG Workshop at [SC 2016](http://sc16.supercomputing.org/) in Salt Lake City (check back here for details) + +* * * + +**_About Author_** + +![](images/Screen-Shot-2016-01-05-at-5.36.01-PM-150x150.png)[Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy) is a Senior Manager with IBM Systems Lab and brings 15 years experience in High Performance Computing R&D and technical leadership to his many activities within the OpenPOWER Foundation including leadership of the Foundation's Academic Discussion Working Group.  He's passionate about working with universities and research institutes with whom he's currently working to help develop curriculum, labs, and centers of excellence around OpenPOWER technology. diff --git a/content/blog/workshop-recap-openpower-personalized-medicine-working-group.md b/content/blog/workshop-recap-openpower-personalized-medicine-working-group.md new file mode 100644 index 0000000..0795419 --- /dev/null +++ b/content/blog/workshop-recap-openpower-personalized-medicine-working-group.md @@ -0,0 +1,67 @@ +--- +title: "Workshop Recap: OpenPOWER Personalized Medicine Working Group" +date: "2015-12-18" +categories: + - "blogs" +tags: + - "genomics" + - "personalized-medicine" + - "transmart" +--- + +_By Zaid Al-Ars, Cofounder, Bluebee and Chair of the OpenPOWER Foundation Personalized Medicine Working Group_ + +More than 40 participants attended the OpenPOWER Personalized Medicine Workshop in Austin, TX on November 15, 2015.  The workshop gathered leading experts to address computational technology in the field of personalized medicine including challenges, opportunities and future developments. + +Separate sessions featured the perspectives of clinical users, technology providers, and HPC researchers, followed by a panel discussion on overall industry challenges and trends. + +# **Session 1: Clinical Users Perspective** + +[Dr. John Zhang (MD Anderson)](http://www.mdanderson.org/education-and-research/departments-programs-and-labs/programs-centers-institutes/institute-for-applied-cancer-science/meet-the-team/leadership-team/jianhua-zhang.html) described the state-of-the-art computational infrastructure at the MD Anderson Cancer Center used for the analysis of the center's genomics pipelines, followed by a discussion of future challenges in genomics data storage, clinical algorithm adaptation, data mining and data visualization. + +[Dr. Hans Hofmann (UT Austin)](http://cichlid.biosci.utexas.edu/dr.-hans-hofmann.html) presented a global analytical framework for linking genotype information to phenotype information by addressing the biochemistry, cell biology and physiological aspects of an organism, charting the associated computational and analytical challenges. He noted that for personalized medicine approaches to succeed, we must increase our understanding of the causes and consequences of individual and population variation well beyond current genome-wide association and genotype variation studies. + +![2015-11-14 10.15.22_2](images/2015-11-14-10.15.22_2-1024x576.jpg) + +# **Session 2: Technology Providers Perspective** + +[Dr. 
Zaid Al-Ars (Bluebee)](http://nl.linkedin.com/pub/zaid-al-ars/1/183/95b) presented Bluebee’s platform to address the genome analysis challenge – an accelerated HPC-based private cloud solution to speedup processing of mass volumes of genomics data. The platform provides unrestricted scale-up and on-the-fly provisioning of computational and data storage capacity, along with industry-grade security and data integrity features.  Bluebee’s platform abstracts away the complexity of specialized HPC technologies such as hardware acceleration, offering an easy environment to deploy Bluebee as well as other OpenPOWER genomics technologies. + +[Dr. Yinhe Cheng (IBM)](https://www.linkedin.com/in/yinhe-cheng-085baba) discussed IBM's porting and optimization efforts around its [high performance infrastructure for genomics](http://www.ibm.com/common/ssi/cgi-bin/ssialias?subtype=WH&infotype=SA&htmlfid=POW03163USEN&attachment=POW03163USEN.PDF), including: + +- [BioBuilds](https://biobuilds.org/?cm_mc_uid=02671964985114431129520&cm_mc_sid_50200000=1450114271) a curated and versioned collection of Open Source bioinformatics tools for genomics, delivering 49 pre-built, POWER8 optimized bioinformatics application binaries +- Broad Best Practices pipeline (BWA/GATK) acceleration on POWER8 demonstrating 2x to 70x analysis speedup of various components of the pipeline - a collaborative effort among IBM, Xilinx and Bluebee +- Speedup of whole human genome analysis from days to less than half an hour using the [Edico Genome solution on Power](https://forums.xilinx.com/t5/Xcell-Daily-Blog/FPGA-based-Edico-Genome-Dragen-Accelerator-Card-for-IBM/ba-p/665850) + +![IMG_3741](images/IMG_3741-1024x768.jpg) + +# **Session 3: HPC Researchers Perspective** + +[Dr. Ravishankar Iyer (University of Illinois Urbana-Champaign)](https://www.ece.illinois.edu/directory/profile/rkiyer) presented research projects focused on improving the performance of cancer diagnostics pipelines, including a computational pipeline coded from scratch that executes significantly faster than current state-of-the-art pipelines. He also presented algorithms for health monitoring systems and wearable devices being integrated into a unified personalized medicine platform. + +[Dr. Jason Cong (UCLA)](http://vast.cs.ucla.edu/people/faculty/jason-cong) presented a Spark based approach enabling big data applications on scale-out, hybrid CPU and FPGA cluster architecture. The approach is being used to enable substantial performance increase for genomics computational pipelines such as those used for whole-genome and whole-exome sequencing experiments. + +[Dr. Wayne Luk (Imperial College London)](http://www.imperial.ac.uk/people/w.luk) presented a talk covering reconfigurable acceleration of genomics data processing and compression, demonstrating FPGA accelerated speedup of parts of RNA diagnostics pipelines used to identify cancer. To address large sizes of genomics datasets, his group implemented accelerated compression algorithms to speedup effective storage and management of DNA information. His continuing efforts are focused on optimization and speedup of transMART downstream DNA data analysis on IBM Power platforms. + +# **Challenges and Trends Panel Discussion** + +Four experts representing various users of genomics information and pipelines participated in a panel moderated by [Dr. Peter Hofstee (IBM)](https://www.linkedin.com/pub/peter-hofstee/b/886/6b4): + +- [Dr. 
Chris Webb (UT Austin Dell Medical School)](http://dellmedschool.utexas.edu/team-profile/chris-webb) +- [Phil Greer (University of Pittsburgh)](https://www.linkedin.com/in/phil-greer-a0994631) +- [Dr. John Zhang (MD Anderson)](http://www.mdanderson.org/education-and-research/departments-programs-and-labs/programs-centers-institutes/institute-for-applied-cancer-science/meet-the-team/leadership-team/jianhua-zhang.html) +- [Dr. Hans Hofmann (UT Austin)](http://cichlid.biosci.utexas.edu/dr.-hans-hofmann.html). + +Dr. Webb started the discussion, emphasizing that scientists and research groups working in isolation cannot answer the relevant questions in personalized medicine. Rather, close collaboration among multidisciplinary teams of doctors, geneticists, computer scientists and mathematicians is required to answer difficult questions and develop suitable models and efficient computational methods for use in a clinical environment. + +Mr. Greer pointed out that changes are needed to enable effective analysis of personalized medicine information. For example, the lack of unified approaches to documenting and storing patient medical records complicates linking the different sources of information relevant to personalize medical care. + +Answering a question from Dr. Hofstee about challenges in the growing field of population sequencing, Dr. Zhang identified the need to help doctors in making actionable decisions based on patient medical information. Dr. Hofmann commented that even common tasks such as data transmission are rapidly becoming a bottleneck due to the staggering sizes of population sequencing information. He further elaborated that standards are needed to ensure security and easy integration between the various genomics data types. + +The panel concluded that the community must address computational approaches that consider the inherent variations of the human genome and the different ways these variations play a role in the individual. This will provide doctors with the tools needed to identify levels of confidence associated with a specific therapeutic intervention. Such tools will play an important role in the medical revolution of personalized medicine. + +* * * + +**_About Zaid Al-Ars_** + +![zaid](images/zaid-150x150.jpg)Zaid Al-Ars is cofounder of [Bluebee](https://www.bluebee.com/), where he leads the development of the Bluebee genomics solutions. Zaid is also an assistant professor at the Computer Engineering Lab of Delft University of Technology, where he leads the research and education activities of the multi/many-core research theme of the lab. Zaid is involved in groundbreaking genomics research projects such as the optimized child cancer diagnostics pipeline with University Medical Center Utrecht and de novo DNA assembly research projects of novel organisms with Leiden University. 
diff --git a/content/blog/worlds-first-openpower-app-throwdown-showcases-five-strong-isv-innovations.md b/content/blog/worlds-first-openpower-app-throwdown-showcases-five-strong-isv-innovations.md new file mode 100644 index 0000000..bdbdd4b --- /dev/null +++ b/content/blog/worlds-first-openpower-app-throwdown-showcases-five-strong-isv-innovations.md @@ -0,0 +1,30 @@ +--- +title: "World’s First OpenPOWER App Throwdown Showcases Five Strong ISV Innovations" +date: "2014-10-06" +categories: + - "blogs" +--- + +By Terri Virnig, Vice President, Power Ecosystem and Strategy, IBM Systems & Technology Group + +Last year, IBM and the founders of the OpenPOWER Foundation shook up the server industry when they announced IBM’s POWER8 chip would be open for cross-industry development.  Fast forward one year and the organization has grown 12x, with 61 co-collaborators on board and counting.  The strong attraction has likely stemmed from a shared belief that openness is key to innovation – that no one company alone can own the innovation agenda for an entire industry. + +Independent software vendors (ISVs) historically have been quick to embrace openness, and this is no different.  ISVs are among the first to begin leveraging OpenPOWER’s development building blocks, including essential technical specifications and hundreds of thousands of lines of firmware code.  As a result, ISVs are bringing forward several interesting new apps designed not just for IBM Power Systems running Linux, but also compliant with any future non-IBM, OpenPOWER based system or solution to come to market. + +To further support this momentum, we’re pleased to announce today the world’s first [OpenPOWER App Throwdown](http://ibmappthrowdown.tumblr.com/tagged/ibmenterprise) taking place at IBM’s Enterprise2014 conference in Las Vegas.   The contest builds upon the success of our Linux on Power App Throwdown, and will recognize some of the most innovative applications being developed to solve real business challenges. + +After reviewing 21 fantastic submissions, the competition has been narrowed down to five exceptional finalists. All finalists have built apps that leverage POWER’s Big Data capabilities in an open environment, solving problems across healthcare, retail and more with solutions that can tackle a variety of growing business challenges in new ways. And you will be able to decide the winner, by first viewing the finalists' [videos](http://ibmappthrowdown.tumblr.com/tagged/ibmenterprise), and then voting on Twitter with the hashtag [#IBMEnterpriseApp](https://twitter.com/hashtag/ibmenterpriseapp) along with that finalist’s Twitter handle (see list below). + +We want to thank all of the teams who submitted their contributions. Below are the finalists in the OpenPOWER App Throwdown: + +- [Information Builders](https://www.youtube.com/watch?v=MeLWH49p4dQ&feature=youtu.be) (@infobldrs) built WebFOCUS 8, a reporting application running on Linux on Power that evaluates the performance of Power compared to x86 machines. The company also created OEM Workload for Power, which shows which POWER8 architecture would fit customers best based on their workloads. + + +- [ARGOS Computer Systems](https://www.youtube.com/watch?v=ijkX1OeJrvs&feature=youtu.be) (@ARGOS\_Computers) runs its cognitive engine on Linux on POWER8 systems. The engine has demanding workloads, running cognitive agents in several virtual environments.  The agents make financial transactions, calls, purchases, and can even interact with Human Resources. 
POWER8 increases the productivity of the engine by effectively doubling threads, powering more virtual agents. + + +- [Redis Labs](https://www.youtube.com/watch?v=Wh8cqzFpxCE&feature=youtu.be) (@RedisLabsInc) worked with IBM to port and optimize its open-source in-memory NoSQL database for flash.  The original Redis database took the world by storm, and now, by porting the capability to the POWER8 platform, the solution has become significantly more cost efficient. The solution runs on POWER8 using CAPI flash, cutting deployment costs by 70 percent and achieving a 24-to-1 resource consolidation versus x86-based deployments. +- [Zato Health](https://www.youtube.com/watch?v=93IbgDbc5G0) (@zatohealth) delivers its Interoperability Platform via Power Systems on Linux, enabling proactive personalized medicine by accessing electronic health records across clinical and genomic data silos, data centers and organizations. It uses natural language processing to determine diagnostic criteria to better tailor treatment, identify opportunities for early intervention, and detect potential insurance savings qualifications. +- [Zend Technologies](https://www.youtube.com/watch?v=RmxAah-3cd8) (@zend) developed Zend Server, an application platform for PHP running on POWER8 which can significantly improve the performance of data analysis for a variety of applications.  One of its key features is Z-Ray, an analytics tool that evaluates application performance, giving insight into website data like event monitoring, database queries, execution and memory performance. + +Through the OpenPOWER App Throwdown, we can see firsthand how POWER’s open architecture is able to drive meaningful innovation.  No matter the winner, we are proud to work with all of these top-rate teams. + +So, now is the time to cast your vote with your social media voice.  Tweet #IBMEnterpriseApp and the Twitter handle of the ISV you feel most deserves to win.  The winner will be announced at the IBM Enterprise2014 ISV/MSP Mashup.  Then, be on the lookout for a live tweet from the OpenPOWER Twitter handle (@OpenPOWEROrg) announcing the first winner of the OpenPOWER App Throwdown! diff --git a/content/blog/xilinx-demonstrates-fpga-based-acceleration-technology-for-next-generation-data-centers-at-ibm-impact-2014.md b/content/blog/xilinx-demonstrates-fpga-based-acceleration-technology-for-next-generation-data-centers-at-ibm-impact-2014.md new file mode 100644 index 0000000..6040fc9 --- /dev/null +++ b/content/blog/xilinx-demonstrates-fpga-based-acceleration-technology-for-next-generation-data-centers-at-ibm-impact-2014.md @@ -0,0 +1,9 @@ +--- +title: "Xilinx Demonstrates FPGA-Based Acceleration Technology for Next-Generation Data Centers at IBM Impact 2014" +date: "2014-04-25" +categories: + - "press-releases" + - "blogs" +--- + +SAN JOSE, Calif., April 25, 2014 /PRNewswire/ -- Xilinx, Inc. (NASDAQ: [XLNX](http://studio-5.financialcontent.com/prnews?Page=Quote&Ticker=XLNX "XLNX")) today announced it will present the industry's first key value store acceleration demo based on the IBM CAPI protocol at the IBM Impact 2014 Conference.  As a member of the IBM OpenPOWER Foundation, Xilinx is delivering FPGA-based acceleration technologies for use in next-generation data centers and is among the growing open development community dedicated to accelerating innovation using IBM's POWER microprocessor. 
diff --git a/content/blog/xilinx-to-participate-in-the-inaugural-openpower-summit-2015-to-further-enable-collaborative-innovation-for-next-generation-data-centers.md b/content/blog/xilinx-to-participate-in-the-inaugural-openpower-summit-2015-to-further-enable-collaborative-innovation-for-next-generation-data-centers.md new file mode 100644 index 0000000..e1f76ee --- /dev/null +++ b/content/blog/xilinx-to-participate-in-the-inaugural-openpower-summit-2015-to-further-enable-collaborative-innovation-for-next-generation-data-centers.md @@ -0,0 +1,40 @@ +--- +title: "Xilinx to Participate in the Inaugural OpenPOWER Summit 2015 to Further Enable Collaborative Innovation for Next-Generation Data Centers" +date: "2015-03-12" +categories: + - "press-releases" + - "blogs" +tags: + - "featured" +--- + +SAN JOSE, Calif., March 12, 2015 /[PRNewswire](http://www.prnewswire.com/)/ -- Xilinx, Inc. (NASDAQ: XLNX) today announced it will participate in the inaugural OpenPOWER™ Summit 2015 to further enable collaborative innovation for next-generation data centers. As a member of the OpenPOWER Foundation, Xilinx is delivering FPGA-based acceleration technologies for high performance compute solutions and is among the growing open development community dedicated to accelerating innovation using IBM's POWER microprocessor. At the event, Xilinx joins a distinguished lineup of OpenPOWER Foundation keynote speakers, technical workgroup updates and member presentations. To learn more, visit Xilinx at the OpenPOWER Summit, March 17 - 19, 2015, at the San Jose Convention Center, San Jose, CA. + +Logo - [http://photos.prnewswire.com/prnh/20020822/XLNXLOGO](http://photos.prnewswire.com/prnh/20020822/XLNXLOGO) + +**Xilinx Participation at OpenPOWER Summit 2015** + +**Wednesday, March 18 at 3:55PM** + +**_“Key-Value Store Acceleration with OpenPOWER_**_” by Michaela Blott, Senior Staff Research Engineer, Xilinx_ + +- This presentation discusses the architecture of an accelerated key-value store appliance which leverages a novel data-flow implementation of Memcached on an FPGA. The design achieves gains of up to 36X in performance and power, with response times in the microsecond range. Coherent integration of memory through IBM's Power8 CAPI interface allows both host memory and coherent-attached flash to be used as the value store. + +**Wednesday, March 18 at 6:30PM** + +**_“Data Center and Cloud Computing Market Landscape and Challenges_**” _by Manoj Roge, Director of Wired and Data Center Solutions, Xilinx_ + +- This presentation discusses data center and cloud computing market landscapes, examines technology challenges that limit scaling of cloud computing and delivers insights into how FPGAs combined with general purpose processors are transforming next-generation data centers with tremendous compute horsepower, low latency and extreme power efficiency. + +**Technical Demonstrations in Xilinx Booth #913** + +- **Key Value Store Application Acceleration Solution** Xilinx is showcasing a Key Value Store (KVS) application acceleration demo leveraging the Alpha Data ADM-PCIE-7V3 board and OpenPOWER's coherent accelerator processor interface (CAPI). The demonstration features a broadly applicable KVS workload acceleration engine that delivers improved performance/watt at lower latency. 
+- **Convey Computer Corporation OpenPOWER-based Acceleration Solution** Convey Computer Corporation will be showing its Eagle co-processor, a PCIe® form factor add-in card that utilizes Xilinx FPGAs to deliver application-specific acceleration for data-intensive applications. Eagle co-processors are IBM Power8 CAPI capable, and incorporate a Xilinx Virtex-7 X980T FPGA with four on-board SO-DIMMs for local data storage. OpenPOWER systems with Eagle co-processors provide an ideal solution for big data and high performance computing applications. + +**About Xilinx** + +Xilinx is the world's leading provider of All Programmable FPGAs, SoCs and 3D ICs. These industry-leading devices are coupled with a next-generation design environment and IP to serve a broad range of customer needs, from programmable logic to programmable systems integration. For more information, visit [www.xilinx.com](http://www.xilinx.com/). + +#1518 #AAB852 © Copyright 2015 Xilinx, Inc. Xilinx, the Xilinx logo, Artix, ISE, Kintex, Spartan, Virtex, Vivado, Zynq, and other designated brands included herein are trademarks of Xilinx in the United States and other countries. All other trademarks are the property of their respective owners. + +**Xilinx** Silvia E. Gianelli (408) 626-4328 [silvia.gianelli@xilinx.com](mailto:silvia.gianelli@xilinx.com) diff --git a/content/blog/xl-cc-and-gpu-programming-on-power-systems.md b/content/blog/xl-cc-and-gpu-programming-on-power-systems.md new file mode 100644 index 0000000..fd77873 --- /dev/null +++ b/content/blog/xl-cc-and-gpu-programming-on-power-systems.md @@ -0,0 +1,32 @@ +--- +title: "XL C/C++ and GPU Programming on Power Systems" +date: "2015-01-19" +categories: + - "blogs" +--- + +### Presentation Objective + +Provide information on the integration of the nVidia Tesla GPU with IBM’s POWER8 processor, and details on how to develop on this platform using nVidia’s software stack and the POWER8 compilers. + +### Abstract + +The OpenPOWER Foundation is an organization with a mandate to enable member companies to customize POWER processors and system platforms for optimization and innovation for their business needs. One such customization is the integration of graphics processing unit (GPU) technology with the POWER processor. IBM has recently announced the IBM POWER S824L system, a data processing powerhouse that integrates the nVidia Tesla GPU with IBM's POWER8 processor. This joint presentation with nVidia and IBM will contain details of the S824L system, including an overview of the Tesla GPU and how it interoperates with the POWER8 processor. It will also describe the nVidia software stack and how it works with the POWER8 compilers. + +### Speaker Bio: + +Kelvin Li is an Advisory Software Developer at IBM Canada Lab in the compiler development area.  He has experience in Fortran, C and C++ compiler development.  His interest is in parallel programming models and languages.  He is the IBM representative on the OpenMP Architecture Review Board, a member of the language committee and the chair of the Fortran subcommittee. 
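The abstract above describes how nVidia's software stack works together with the POWER8 compilers. As a rough, hypothetical illustration (not taken from the presentation), the plain host-side C program below queries the Tesla GPUs through the CUDA runtime API; the file name, install path and compiler flags are assumptions chosen for the example, not prescribed by the talk.

```c
/*
 * Hedged sketch: host-side C compiled with a POWER8 host compiler and linked
 * against NVIDIA's CUDA runtime. A typical (assumed) build line might be:
 *   xlc -O2 gpu_query.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    int i;

    /* Ask the CUDA runtime how many GPUs are visible to this process. */
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }

    for (i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GiB, compute capability %d.%d\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}
```

A device query of this kind is typically the first sanity check on a GPU-equipped Power system such as the S824L before building larger CUDA workloads.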
+ +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Li-Kelvin_OPFS2015_IBM_031315_final.pdf) + +### Presentation + + + + [Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Lin-Yonghua_OPFS2015_IBM_031315_final.pdf) + +[Back to Summit Details](javascript:history.back()) diff --git a/content/blog/xl-compilers-power9.md b/content/blog/xl-compilers-power9.md new file mode 100644 index 0000000..5d57a92 --- /dev/null +++ b/content/blog/xl-compilers-power9.md @@ -0,0 +1,64 @@ +--- +title: "OpenPOWER Summit: XL compilers support the latest POWER9 hardware" +date: "2018-05-11" +categories: + - "blogs" +tags: + - "openpower" + - "ibm" + - "openpower-summit" + - "openpower-foundation" + - "power9" + - "openpower-summit-2018" +--- + +_This blog post was originally [published by IBM here](https://www.ibm.com/developerworks/community/blogs/572f1638-121d-4788-8bbb-c4529577ba7d/entry/March_6_2018_at_10_54_54_AM?lang=en)._ + +The March 2018 OpenPOWER Summit in Las Vegas featured "[15 porting and tuning tools in 30 minutes](https://openpowerfoundation.org/summit-2018-03-us-agenda/)", where IBM's POWER9-supporting compilers were discussed. IBM's C, C++, and Fortran compilers support the latest POWER9 hardware [AC922](https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/1/897/ENUS117-111/index.html&lang=en&request_locale=en) and [S922](https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/1/897/ENUS118-021/index.html&request_locale=en) \- [download our no-charge full-function unlimited-production-use Community Edition today.](https://www.ibm.com/us-en/marketplace/xl-cpp-linux-compiler-power) + +Last December, there was another OpenPOWER summit in Beijing, a one-day event where OpenPOWER members exhibited and presented the latest technology solutions. The summit aimed to drive the development of the OpenPOWER ecosystem in China, speed up corresponding OpenPOWER roadmaps, demonstrate the cooperative innovation in the China market, and announce the first POWER9 server [AC922](https://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/1/897/ENUS117-111/index.html&lang=en&request_locale=en) to Chinese customers. More than 400 customers, foundation members, developers, and ecosystem partners participated in the forum. + +There were three sub-forums: "OpenPOWER AI and Industry Solutions", "OpenPOWER Platform Software" and "OpenPOWER Hardware, Acceleration in Systems". In the "OpenPOWER Platform Software" sub-forum, **we discussed the new IBM XL compilers on POWER9**. Around 30 participants attended the session, some of whom were speakers from different software vendors. We introduced that **XL compilers fully exploit powerful hardware features of the POWER9 architecture and greatly support heterogeneous parallel programming with CUDA and OpenMP**; we also emphasized that **you can compile your open source software with XL since it [integrates Clang as the front end of the compiler](https://www.ibm.com/developerworks/community/blogs/572f1638-121d-4788-8bbb-c4529577ba7d/entry/What_XL_s_adoption_of_Clang_means_to_you?lang=en)**. At the other side of the conference hall, we had one booth in the exhibition area where we demonstrated the capability of the new POWER compilers with videos and flyers. 
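To give a flavor of the built-in-function material in the figures below, here is a small, hypothetical example (not taken from the Summit talk) using the generic VSX/AltiVec vector built-ins from `<altivec.h>` that both XL C/C++ and GCC accept on POWER; the POWER9-specific built-ins and the MASS library are documented in the Knowledge Center links further down.

```c
/*
 * Hedged sketch of VSX vector built-ins on POWER. The compile lines are typical
 * examples, e.g. "xlc -O3 -qarch=pwr9 -qaltivec vec_axpy.c" or
 * "gcc -O3 -mcpu=power9 vec_axpy.c"; exact flags depend on your compiler level.
 */
#include <altivec.h>
#include <stdio.h>

int main(void)
{
    /* A 128-bit VSX register holds two doubles. */
    __vector double a = {1.0, 2.0};
    __vector double b = {10.0, 20.0};
    __vector double c = {100.0, 200.0};

    /* Element-wise r = a * b + c on both lanes at once. */
    __vector double r = vec_add(vec_mul(a, b), c);

    /* Read the lanes back as scalars through a union. */
    union { __vector double v; double d[2]; } u;
    u.v = r;
    printf("%f %f\n", u.d[0], u.d[1]);   /* expected: 110.000000 240.000000 */
    return 0;
}
```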
+ +\[caption id="attachment\_5430" align="aligncenter" width="625"\][![](images/Migrating-to-POWER9-1024x546.png)](https://openpowerfoundation.org/wp-content/uploads/2018/05/Migrating-to-POWER9.png) Figure 1. Suggestion on how to migrate your application to POWER9\[/caption\] + +\[caption id="attachment\_5431" align="aligncenter" width="625"\][![](images/POWER9-technology-exploitation-1024x614.png)](https://openpowerfoundation.org/wp-content/uploads/2018/05/POWER9-technology-exploitation.png) Figure 2. Exploit POWER9 technology using XL's POWER9 exploited scalar & vector built-in functions (BIFs)\[/caption\] + +\[caption id="attachment\_5432" align="aligncenter" width="625"\][![](images/POWER9-technology-exploitation-2-1024x536.png)](https://openpowerfoundation.org/wp-content/uploads/2018/05/POWER9-technology-exploitation-2.png) Figure 3. Exploit POWER9 technology using XL's high-performance math library tuned for POWER9 (MASS)\[/caption\] + +Read more about our POWER9 exploitation here: + +- C/C++: + - [POWER9 technology exploitation](https://www.ibm.com/support/knowledgecenter/SSXVZZ_13.1.6/com.ibm.xlcpp1316.lelinux.doc/proguide/p9_tech.html) + - [POWER9 compiler options](https://www.ibm.com/support/knowledgecenter/SSXVZZ_13.1.6/com.ibm.xlcpp1316.lelinux.doc/proguide/p9_opt.html) + - [POWER9 built\-in functions](https://www.ibm.com/support/knowledgecenter/SSXVZZ_13.1.6/com.ibm.xlcpp1316.lelinux.doc/proguide/p9_bif.html) + - [POWER9 MASS (An accelerated math library)](https://www.ibm.com/support/knowledgecenter/SSXVZZ_13.1.6/com.ibm.xlcpp1316.lelinux.doc/proguide/p9_lib.html) +- Fortran: + - [POWER9 technology exploitation](https://www.ibm.com/support/knowledgecenter/en/SSAT4T_15.1.6/com.ibm.xlf1516.lelinux.doc/proguide/p9_tech.html) + - [POWER9 compiler options](https://www.ibm.com/support/knowledgecenter/en/SSAT4T_15.1.6/com.ibm.xlf1516.lelinux.doc/proguide/p9_opt.html) + - [POWER9 intrinsic procedures](https://www.ibm.com/support/knowledgecenter/en/SSAT4T_15.1.6/com.ibm.xlf1516.lelinux.doc/proguide/p9_bif.html) + - [POWER9 MASS (An accelerated math library)](https://www.ibm.com/support/knowledgecenter/en/SSAT4T_15.1.6/com.ibm.xlf1516.lelinux.doc/proguide/p9_lib.html) + +With POWER9 technology exploitation, XL compilers can help you achieve the maximum return on your POWER investment. + +\[caption id="attachment\_5433" align="aligncenter" width="625"\][![](images/Compilers-1024x576.png)](https://openpowerfoundation.org/wp-content/uploads/2018/05/Compilers.png) Figure 4. Why IBM XL Compilers on POWER9? Up to 1.66x faster tonto benchmark vs GCC7\[/caption\] + +During the morning keynotes session, many impressive points were made in the speeches delivered by representatives from GCG, OpenPOWER Foundation, customers, and partners: + +- Chen Liming, Chairman of IBM Greater China, pointed out that **AI is becoming key to the future business success in the "big data explosion" era**. Through OpenPOWER, IBM has established a platform for technology exchange and cooperation in China for the joint development of OpenPOWER with Chinese enterprises. +- OpenPOWER customer Tencent emphasized that they acquired a number of OpenPOWER systems for their growing enterprise data center. **Last year, with the support of OpenPOWER technology, Tencent Cloud set four world records in the Sort Benchmark competition** and showed world-class performance in the field of big data analysis and application management. 
- The VP of Inspur Group mentioned that the **OpenPOWER system provides more efficient intelligent computing capabilities to support the rapid development of Chinese enterprises**. Inspur will build on OpenPOWER's ecosystem to create a diversified infrastructure and will offer a range of OpenPOWER platforms around the areas of cloud computing, artificial intelligence, and big data. In September, Inspur and IBM announced that they would set up a joint venture to develop OpenPOWER server products that fit the Chinese market. + +### We also attended some sessions in those three sub-forums and would like to share some exciting OpenPOWER projects: + +- **Inspur showed an upcoming dual OpenPOWER9 server prototype machine**. This product supports all-NVMe SSD storage and has four P100 GPU cards to meet the requirements of smart computing. Inspur also announced that they would fully support PowerAI and showcased a PowerAI-based multi-target real-time tracking solution. Multi-target tracking is a type of video analysis technology widely used in security, smart city, transportation, and many other fields. It is also one of the hottest AI applications. The program is based on the Inspur P820 server. The Inspur P820 supports two OpenPOWER processors clocked at 3.4GHz, 64 memory slots and 12 PCI-E slots, and is a mature product already on the market. +- **Zilliz announced China's first GPU hardware-accelerated OLAP database solution and GPU database appliance, MEGAWISE, based on the IBM POWER high-performance server**. The system uses an Nvidia Tesla P100 processor for large-scale parallel data processing, and uses NVLink technology to achieve high-speed interconnection between the GPU and the CPU, with 10x data query performance improvements, 10x lower hardware costs, and 20x lower operating costs. The technology can be widely used in banking, finance, telecommunications, energy, Internet of Things, medical, e-commerce and other fields. +- **AI related**: Big data and AI application vendor Cumulative Data showed its intelligent cost-control cloud with full-cycle chronic disease management; Shanghai Flutter showed Industrial Appearance Defect Detection Solutions based on OpenPOWER deep learning; Tsinghua University doctoral students conducted a facial expression detection pilot study on the Red Cloud CRH AI big data platform, which is based on the Neu Cloud Oriental NL2822G-2 server + P100 GPU. +- **POWER9-specific**: Gigabyte showed its OpenPOWER server based on P9 Sforza; Wistron demonstrated water cooling solutions with NVLink technology and two P9 full-size OpenCAPI / OCP designs; NEC from Japan demonstrated P9 HA and extended Ethernet solutions, etc. 
diff --git a/themes/openpowerfoundation/layouts/blog/list.html b/themes/openpowerfoundation/layouts/blog/list.html new file mode 100644 index 0000000..5cb0800 --- /dev/null +++ b/themes/openpowerfoundation/layouts/blog/list.html @@ -0,0 +1,48 @@ +{{ partial "header.html" . }} +{{ partial "navbar.html" . }} +
+
+
+
+
+
+

{{ .Title }}

+
+
+
+
 
+
{{ .Content }}
+
 
+
+
+
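+ {{/* Blog list body: paginate the section's posts newest-first, 30 per page; each entry reads an optional "image" front-matter param and resolves it via resources.Get from images/blog/. */}}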
+ {{ range (.Paginate .Data.Pages.ByDate.Reverse 30).Pages }} + {{ $imagename := .Param "image" }} + {{ $imagelocation := (printf "%s/%s" "images/blog/" $imagename) }} + {{ $imageresource := resources.Get $imagelocation }} + + {{ end }} +
+
+
 
+
+ {{ template "_internal/pagination.html" . }} +
+
 
+
+
+
+
+{{ partial "footer.html" . }} diff --git a/themes/openpowerfoundation/layouts/blog/rss.xml b/themes/openpowerfoundation/layouts/blog/rss.xml new file mode 100644 index 0000000..cc8c7a5 --- /dev/null +++ b/themes/openpowerfoundation/layouts/blog/rss.xml @@ -0,0 +1,39 @@ +{{- $pctx := . -}} +{{- if .IsHome -}}{{ $pctx = .Site }}{{- end -}} +{{- $pages := slice -}} +{{- if or $.IsHome $.IsSection -}} +{{- $pages = $pctx.RegularPages -}} +{{- else -}} +{{- $pages = $pctx.Pages -}} +{{- end -}} +{{- $limit := .Site.Config.Services.RSS.Limit -}} +{{- if ge $limit 1 -}} +{{- $pages = $pages | first $limit -}} +{{- end -}} +{{- printf "" | safeHTML }} + + + {{ if eq .Title .Site.Title }}{{ .Site.Title }}{{ else }}{{ with .Title }}{{.}} on {{ end }}{{ .Site.Title }}{{ end }} + {{ .Permalink }} + Recent content {{ if ne .Title .Site.Title }}{{ with .Title }}in {{.}} {{ end }}{{ end }}on {{ .Site.Title }} + Hugo -- gohugo.io{{ with .Site.LanguageCode }} + {{.}}{{end}}{{ with .Site.Author.email }} + {{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}{{end}}{{ with .Site.Author.email }} + {{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}{{end}}{{ with .Site.Copyright }} + {{.}}{{end}}{{ if not .Date.IsZero }} + {{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}{{ end }} + {{- with .OutputFormats.Get "RSS" -}} + {{ printf "" .Permalink .MediaType | safeHTML }} + {{- end -}} + {{ range $pages }} + + {{ .Title }} + {{ .Permalink }} + {{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }} + {{ with .Site.Author.email }}{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}{{end}} + {{ .Permalink }} + {{ .Content | safeHTML }} + + {{ end }} + + diff --git a/themes/openpowerfoundation/layouts/blog/single.html b/themes/openpowerfoundation/layouts/blog/single.html new file mode 100644 index 0000000..73ca10b --- /dev/null +++ b/themes/openpowerfoundation/layouts/blog/single.html @@ -0,0 +1,50 @@ +{{ partial "header.html" . }} +{{ partial "navbar.html" . }} +
+
+
+
+

{{ .Title }}

+

Published on {{ .Date.Format "Monday 2 January 2006" }}

+
+
+
+
 
+
+ {{ .Content }} +
+
 
+
+
+{{ if .Params.tags }} +{{ $tags := .Params.tags }} +
+
+
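+ {{/* Tag cloud: every site tag is scanned, but only tags attached to the current post are rendered; font size scales between $smallestFontSize and $largestFontSize using the log-weighted page count computed below (the earlier linear $fontStep value is superseded by the log-based $currentFontSize). */}}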
+ {{ if not (eq (len $.Site.Taxonomies.tags) 0) }} + {{ $fontUnit := "rem" }} + {{ $largestFontSize := 3.5 }} + {{ $smallestFontSize := 0.5 }} + {{ $fontSpread := sub $largestFontSize $smallestFontSize }} + {{ $max := add (len (index $.Site.Taxonomies.tags.ByCount 0).Pages) 1 }} + {{ $min := len (index $.Site.Taxonomies.tags.ByCount.Reverse 0).Pages }} + {{ $spread := sub $max $min }} + {{ $fontStep := div $fontSpread $spread }} + {{ range $name, $taxonomy := $.Site.Taxonomies.tags }} + {{ $currentTagCount := len $taxonomy.Pages }} + {{ $currentFontSize := (add $smallestFontSize (mul (sub $currentTagCount $min) $fontStep) ) }} + {{ $count := len $taxonomy.Pages }} + {{ $weigth := div (sub (math.Log $count) (math.Log $min)) (sub (math.Log $max) (math.Log $min)) }} + {{ $currentFontSize := (add $smallestFontSize (mul (sub $largestFontSize $smallestFontSize) $weigth) ) }} + {{ range $tags }} + {{ if eq $name . }} +  {{ $name }}  + {{ end }} + {{ end }} + {{ end }} + {{ end }} +
+
+{{ end }} +
+{{ partial "footer.html" . }}