Compare commits


1 commit

8 binary image files changed (diffs not shown) — sizes before: 252 KiB, 182 KiB, 91 KiB, 78 KiB, 1.3 MiB, 1.5 MiB, 61 KiB, 29 KiB.

@@ -12,7 +12,6 @@ pygmentsUseClasses = true
enableGitInfo = true
enableRobotsTXT = true
disableHugoGeneratorInject = false
enableInlineShortcodes = true
enableEmoji = true

[build]
@@ -21,36 +20,20 @@ enableEmoji = true
writeStats = true

[outputs]
home = [ "HTML" , "RSS" ]
section = [ "HTML", "RSS" ]
page = [ "HTML" , "RSS" , "JSON" , "LDIF" ]
taxanomy = [ "HTML" , "JSON" , "RSS" ]
# term = [ "HTML" , "JSON" ]
home = [ "HTML" , "JSON" , "RSS" ]
section = [ "HTML", "JSON", "RSS" ]
page = [ "HTML" , "JSON" ]

[outputFormats]
[outputFormats.RSS]
mediatype = "application/rss"
baseName = "feed"
suffix = "xml"
isPlainText = false
notAlternative = false
[outputFormats.JSON]
mediaType = "application/json"
baseName = "index"
suffix = "json"
isPlainText = false
isPlainText = true
notAlternative = true
[outputFormats.LDIF]
name = "ldif"
mediaType = "text/ldif"
baseName = "index"
suffix = "ldif"
isPlainText = false
notAlternative = true

[mediaTypes]
[mediaTypes."text/ldif"]
suffixes = [ "ldif" ]

[markup]
[markup.goldmark]
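For context, the `LDIF` output removed in the hunk above followed Hugo's standard custom-output-format pattern: register a media type under `[mediaTypes]`, point an `[outputFormats]` entry at it, then list the format per page kind under `[outputs]`. A minimal standalone sketch of that pattern (field values mirror the removed lines above; this is a reference sketch, not part of the diff):

```toml
# 1. Register the media type so Hugo knows the file suffix for it.
[mediaTypes]
  [mediaTypes."text/ldif"]
    suffixes = [ "ldif" ]

# 2. Define an output format that uses that media type.
[outputFormats]
  [outputFormats.LDIF]
    name = "ldif"
    mediaType = "text/ldif"
    baseName = "index"        # emits index.ldif
    suffix = "ldif"
    isPlainText = false
    notAlternative = true     # excluded from .AlternativeOutputFormats

# 3. Enable the format for a page kind; Hugo then renders it from a
#    matching template in layouts/.
[outputs]
  page = [ "HTML", "LDIF" ]
```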

@@ -1,4 +1,5 @@
### navbar

[[navbar]]
name = "About"
identifier = "about"
@@ -33,26 +34,31 @@
identifier = "steeringcommittee"
url = "/steeringcommittee"
weight = -600

[[navbar]]
name = "Events"
identifier = "events"
url = "/events/"
weight = -1900

[[navbar]]
name = "Working Groups"
identifier = "groups"
url = "/groups/"
weight = -1800

[[navbar]]
name = "Members"
identifier = "members"
url = "/members/"
weight = -1700

[[navbar]]
name = "HUB"
identifier = "hub"
url = "/hub/"
weight = -1600

[[navbar]]
name = "Technical"
identifier = "technical"
@@ -82,14 +88,6 @@
identifier = "resources"
url = "/resources/"
weight= -600
[[navbar]]
name = "Blog"
weight = -100
url = "/blog/"
[[navbar]]
name = "Contact Us"
url = "/contact/"
weight = -10

### policy
[[policy]]
@@ -138,34 +136,6 @@
pre = "far fa-file-pdf"
weight = -1600

###
[[code]]
name = "Git"
pre = "fas fa-code-branch"
url = "https://git.openpower.foundation/"
[[code]]
name = "GitHub"
pre = "fab fa-github"
url = "https://github.com/OpenPOWERFoundation"
[[code]]
name = "GitLab"
pre = "fab fa-gitlab"
url = "https://gitlab.com/OpenPOWERFoundation"

###
[[discuss]]
name = "Discuss"
pre = "fas fa-comments"
url = "https://discuss.openpower.foundation/categories"
[[discuss]]
name = "Chat"
pre = "fas fa-comment-dots"
url = "https://chat.openpower.foundation"
[[discuss]]
name = "Slack"
pre = "fab fa-slack"
url = "https://join.slack.com/t/openpowerfoundation/shared_invite/zt-9l4fabj6-C55eMvBqAPTbzlDS1b7bzQ"

### social
[[social]]
name = "Twitter"

@@ -3,7 +3,7 @@
URI = "stats.vantosh.com"
ID = "69"
[forms.contact]
URI = "https://webscripts.vantosh.com/forms/contactus/prod/opf"
URI = "https://webscripts.vantosh.com/forms/contactus/prod/opfm"
[forms.hub]
URI = "https://webscripts.vantosh.com/forms/hub/prod/opf"
[forms.passport]

@@ -3,12 +3,8 @@ title: Home
promo:
header: OpenPOWER Foundation
p:
- Open Developer Community for the POWER Architecture
- <i>“The Most Open and High-Performance Processor Architecture and Ecosystem in the Industry”</i>
- Create the Future with POWER
calltoaction:
- title: Join us
link: /join/
calltoaction: Join us
image: promobg.png
articles:
- header: Open Innovation
@@ -46,9 +42,6 @@ sections:
- title: Systems
image: systems.jpg
link: /tags/systems
buttons:
- title: Working Groups
link: /groups/
dark:
- With its open ecosystem approach, active participation from its global membership base and powerful foundation of the POWER ISA, the OpenPOWER Foundation is the premier organization to facilitate truly effective collaboration and drive meaningful, accessible innovation across the open hardware industry.
subscribe:

@@ -1,30 +0,0 @@
---
title: "Changing the Game: Accelerating Applications and Improving Performance For Greater Data Center Efficiency"
date: "2015-01-16"
categories:
- "blogs"
---

### Abstract

Planning for exascale, accelerating time to discovery and extracting results from massive data sets requires organizations to continually seek faster and more efficient solutions to provision I/O and accelerate applications. New burst buffer technologies are being introduced to address the long-standing challenges associated with the overprovisioning of storage by decoupling I/O performance from capacity. Some of these solutions allow large datasets to be moved out of HDD storage and into memory quickly and efficiently. Once processing is complete, data can then be moved back to HDD storage much more efficiently, using algorithms that align small and large writes into streams, enabling users to deploy the largest, most economical HDDs to hold capacity.

This type of approach can significantly reduce power consumption, increase data center density and lower system cost. It can also boost data center efficiency by reducing hardware, power, floor space and the number of components to manage and maintain. Providing massive application acceleration can also greatly increase compute ROI by returning wasted processing cycles to compute that were previously managing storage activities or waiting for I/O from spinning disk.

This session will explain how the latest burst buffer cache and I/O accelerator applications can enable organizations to separate the provisioning of peak and sustained performance requirements with up to 70 percent greater operational efficiency and cost savings than utilizing exclusively disk-based parallel file systems via a non-vendor-captive software-based approach.

### Speaker Bio

[Jeff Sisilli](https://www.linkedin.com/profile/view?id=5907154&authType=NAME_SEARCH&authToken=pSpl&locale=en_US&srchid=32272301421438011111&srchindex=1&srchtotal=1&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438011111%2CVSRPtargetId%3A5907154%2CVSRPcmpt%3Aprimary), senior director of product marketing at DataDirect Networks, has over 12 years' experience creating and driving enterprise hardware, software and professional services offerings and effectively bringing them to market. Jeff is often quoted in storage industry publications for his expertise in software-defined storage and moving beyond traditional approaches to decouple performance from capacity.

### Speaker Organization

DataDirect Networks

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Sisilli_OPFS2015_031815.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Sisilli_OPFS2015_031815.pdf)

[Back to Summit Details](javascript:history.back())

@@ -1,91 +0,0 @@
---
title: "2018 OpenPOWER/CAPI and OpenCAPI Heterogeneous Computing Design Contest"
date: "2018-07-27"
categories:
- "press-releases"
- "blogs"
---


# Build Your Own Super Processor

Organized by IBM China, IPS (Inspur Power Commercial Systems), the OpenPOWER Foundation, the OpenCAPI Consortium and the Fudan University Microelectronics College, the 2018 CAPI/OpenCAPI Heterogeneous Computing Design Contest begins July 6th.

The objective of the contest is to encourage universities and scientific research institutions to understand the advanced technology of FPGA heterogeneous computing on the OpenPOWER system and to prepare applications for technological innovation. Participants will have the opportunity to cooperate with members of the OpenPOWER Foundation and the OpenCAPI Consortium to develop prototypes on an OpenPOWER platform while receiving technical guidance from sponsor companies' experts.

The contest is sponsored by OpenPOWER Foundation members Shenzhen Semptian Data Co., Ltd., Mellanox Technologies, Nallatech (a Molex company) and Xilinx, Inc.

## Background

Heterogeneous computing refers to systems that use more than one kind of processor. Such a multi-core system not only enhances the performance of the processor cores, but also incorporates specialized processing capabilities, such as GPUs or FPGAs, to work on specific tasks.

In recent years, as silicon chips approach their physical and economic limits, Moore's law has been breaking down. Meanwhile, the rapid development of the Internet, the explosive growth of information and the popularization of AI technology have sharply increased the demand for computing power. Heterogeneous computing does not limit its focus to improving CPU performance; it breaks the data-transfer bottleneck between the CPU and peripherals and allows more hardware devices to participate in computing, such as using dedicated hardware for intensive computation or peripheral management, which can significantly improve the performance of the whole system. There is no doubt that heterogeneous computing is the main direction for improving computing power.

Participants in the OpenCAPI heterogeneous computing design contest can gain insight by utilizing and optimizing the most advanced technology available through the OpenPOWER architecture. The contest provides an opportunity to create breakthrough technologies for enterprise and research workloads.

## Contest Rules

The contest will begin on July 6th with submissions due by Nov. 23rd. The winner will be announced publicly at the OpenPOWER China Summit 2018 in December in Beijing; the exact announcement date is yet to be determined.

In the preliminaries, participants will submit a solution proposal for an FPGA accelerator based on CAPI/OpenCAPI technology on OpenPOWER systems. The accelerator can serve any workload that requires high computing power or high data-transaction bandwidth. Ten preliminary winners will be selected and awarded funds to support them in moving on to the final.

In the final, participants will develop a prototype of their proposed solution in a real development environment. Sponsors will provide them with OpenPOWER systems plus CAPI/OpenCAPI-enabled FPGA cards, as well as technical experts who will coach coding and debugging for the CAPI development framework.

## Timeline

| Schedule | Time | Content |
| --- | --- | --- |
| Preliminary | 7/6-8/15 | Enroll and submit proposal |
| | 8/16-8/26 | Expert review |
| | 8/27 | Announce top 10 for the final |
| Final | 8/27-11/23 | Prototype development and submission for the final |
| | 11/24-11/29 | Expert review |
| | (TBD) | Final thesis oral defense and award ceremony |

## Audience and Enrollment

College students from Chinese universities and research institutes who are interested in CAPI/OpenCAPI technology are welcome to join. They are also welcome to join the OpenPOWER Foundation at the Associate or Academic level for free ([https://openpowerfoundation.org/membership-2/levels/](http://openpowerforum.wpengine.com/membership-2/levels/)).

Click [More information](https://mp.weixin.qq.com/s?__biz=MjM5MDk3Mjk0MQ==&mid=509982703&idx=1&sn=48ee68fbdd54b1437e78a1d9c2285864&chksm=3d2dba8d0a5a339baf271ed5cedf51d8f29c097e488244623bac2d6f61121c3292fc45de56a4&scene=18&key=1d3ba184c3454c150135581fb2c6d4fd1a55a420799f8) to get to know more of the contest.

Click [Enroll](http://dsgapp.cn.edst.ibm.com/bps/OpenCAPI/index.html?lectureId=1&project_id=2) to enroll.

## Messages from Organizers and Sponsors

**Waiming Wu, General Manager, IBM OpenPOWER China**

With the ever-increasing demand for computing power today, OpenPOWER, based on IBM POWER processor and Linux technology, has attracted more and more attention from customers, developers and business partners. OpenPOWER systems, with their excellent computing and processing capabilities, are ideal for AI, big data and cloud platforms. The OpenCAPI technology used in OpenPOWER systems supports heterogeneous computing, so that accelerator innovations can be quickly integrated with the POWER processor to provide the next level of computing performance. The new concept of heterogeneous computing based on collaboration between CPU and accelerators heralds a new computing era.

We are pleased to see the announcement and roll-out of "The OpenCAPI + OpenPOWER Heterogeneous Computing Contest" for universities and research institutions. The OpenPOWER Foundation & OpenCAPI Consortium, Fudan University and many OpenPOWER members actively support this activity. This is the best demonstration of support from the academic and corporate communities for technological innovation. IBM will also do its best to co-organize this event and to contribute to developing talent and innovative solutions.

We are also grateful to the technical experts at the IBM China System Lab. During the contest they will share leading technology with the competing teams through in-depth technology seminars, carefully prepared technical documents, one-to-one expert support, and of course great technical mentorship.

**Hugh Blemings, Executive Director, OpenPOWER Foundation**

At the OpenPOWER Foundation we're delighted to see our members like Mellanox, Nallatech, Semptian, Xilinx and of course IBM working together in the "OpenCAPI + OpenPOWER Contest". CAPI/OpenCAPI is a key part of the great open system that OpenPOWER represents and a leading high-speed interconnect for accelerators.

Our members, working with some great universities and research institutions in China, will provide both an opportunity for people to learn about CAPI/OpenCAPI and to see real-world problems solved faster using innovative OpenPOWER hardware and software.

We're looking forward to seeing what innovative ideas the contestants come up with and, of course, congratulating the winners at the OpenPOWER Summit in Beijing in December. We wish all involved the very best!

**Yujing Jiang, Product and Marketing Director, Inspur Power Commercial Systems Co., Ltd.**

Inspur Power Commercial Systems Co., Ltd. is a platinum member of the OpenPOWER Foundation, committed to co-building an open OpenPOWER ecosystem: developing servers based on open Power technology, improving server ecosystems, building a sustainable server business and providing users with advanced, differentiated and diverse computing platforms and solutions. Inspur Power Systems insists on openness and integration for the continuous development of heterogeneous computing architecture based on CAPI. CAPI heterogeneous computing breaks down the computing walls, enhances massive parallel data-processing capabilities and provides more effective and powerful data resources for image and video, deep learning and databases. It also provides extremely high data-transmission bandwidth, defines a more flexible data-storage method, and greatly improves server I/O capabilities.

Inspur Power Systems will provide the OpenPOWER-based data center server FP5280G2 as the platform for the contest to verify and test the entries. It is the first POWER9 platform in China, designed for cloud computing, big data and deep learning, and optimized for performance, expansion and storage. The FP5280G2 provides the industry-leading PCIe Gen4 (16 Gbps) channel and supports CAPI 2.0. We hope this new system will effectively support the contest, and that in the future we can build more systems that enhance heterogeneous computing through CAPI interconnection between the CPU and memory, network, I/O devices and more, applied widely across the industry.

**Yibo Fan, Associate Professor, Fudan University Microelectronics College, China**

CAPI/OpenCAPI is a unique technology in the OpenPOWER system. It provides a superior operating environment for FPGA heterogeneous computing design, in particular eliminating the driver-development process and providing the most convenient path for rapid chip-IP prototype verification and the deployment of heterogeneous systems. Based on CAPI technology, our team released a CAPI running example of an open-source H.265 video encoder. Through the technical cooperation in that project, we fully realized the innovative value of CAPI technology for heterogeneous computing. We hope that by hosting this contest we can reach more excellent teams and talents who study and master CAPI technology at peer universities, and promote CAPI/OpenCAPI technology further in universities and industry.

**Qingchun Song, Asia & Pacific Marketing Director, Mellanox Technologies**

As a member of OpenPOWER, Mellanox is pleased to be involved in the optimization of OpenCAPI. As a provider of intelligent end-to-end network products, Mellanox has always worked closely with x86 and POWER processor platforms, and Mellanox intelligent network products have always been an excellent choice for the POWER platform.

In June 2018, the Summit supercomputer from Oak Ridge National Laboratory in the US was announced at the International Supercomputing Conference in Frankfurt, Germany. It uses POWER CPUs plus Mellanox's InfiniBand network, and it is now the fastest supercomputer and artificial-intelligence computer in the world.

Mellanox network products currently support 100 Gb/s per port, and 200 Gb/s products will be released to market in the next quarter. Higher network speeds require the support of faster internal buses, and high-speed OpenCAPI and 200G network products are an excellent match.

I hope that this OpenCAPI optimization contest can efficiently improve the performance of CAPI and realize integration with RDMA technology, truly matching internal buses with external network buses and helping the next-generation data center. Finally, I wish the contest every success. Thank you.

**Hao Li, General Manager, Semptian Data Co., Ltd.**

Many thanks to the organizers for inviting Semptian to participate in the 2018 OpenCAPI Heterogeneous Computing Design Contest. In recent years, relying solely on the CPU to improve computing performance has reached its end. At the same time, rapidly emerging applications place ever higher demands on computing ability and constantly challenge performance limits. It has become an industry consensus that heterogeneous computing can break the bottlenecks of computing and data transmission.

As a company with more than ten years of experience in FPGA development, Semptian believes that the FPGA's advantages of high performance, low power consumption, flexibility and ease of use, combined with the particular technical advantages of CAPI technology in OpenPOWER systems, make processing specific computations through FPGA + CAPI + CPU the best way to optimize computing performance, reduce acquisition and operating costs, and meet application and power-consumption requirements.

We are very glad to participate in this contest together with other members of the OpenPOWER alliance to expand the alliance's ecosystem. We hope that through this contest we can explore more application scenarios, such as artificial-intelligence inference, image and video acceleration and gene-computing acceleration, to expand the application of heterogeneous computing.

**Fan Kui, Account Sales Manager, Nallatech**

Nallatech and IBM have worked closely through the OpenPOWER Foundation to enable heterogeneous computing by way of CAPI 1.0- and CAPI 2.0-based FPGA accelerators. Nallatech's 250S FPGA accelerator supports CAPI 1.0 and the 250S+ supports CAPI 2.0. Additionally, the OpenPOWER Accelerator Workgroup's "CAPI SNAP Acceleration Framework" is also supported on these cards. CAPI SNAP eases the development of Accelerator Function Units (AFUs) within the FPGA in OpenPOWER systems. As you may well know, FPGA computing is one of the leading technologies in the development of AI and deep learning, and one of the most exciting advancements affecting how we live our lives.

We are all proud to sponsor such an aspirational academic event with the students of China, one that will foster amazing innovations in FPGA technology for generations to come. Thank you for the opportunity to sponsor your event. We wish you great fortune in this contest, as well as in your careers in FPGA acceleration.

File diff suppressed because one or more lines are too long

@@ -1,188 +0,0 @@
---
title: "2019 OpenPOWER + OpenCAPI Heterogeneous Computing Design Contest"
date: "2019-09-24"
categories:
- "blogs"
tags:
- "openpower"
- "openpower-foundation"
- "opencapi"
- "opencapi-contest"
---

After the success of the 2018 OpenPOWER/CAPI and OpenCAPI Heterogeneous Computing Design Contest, we're excited to see its return in 2019! Groups from research institutions or universities in China are welcome to apply. You can find more information on the contest from our OpenPOWER ecosystem friends in China below. Good luck to all of the participants!

![](images/KV-English-1024x556.jpg)

# 2019 OpenPOWER + OpenCAPI Heterogeneous Computing Design Contest

Artificial intelligence, the Internet of Things, deep learning, facial recognition, autonomous driving...

What technologies lie hidden behind these familiar terms?

Rich applications and a convenient life: living in the age of universal digitization, have you ever wondered what supports us?

Behind all of this stand vast numbers of servers providing powerful computing capacity, and the increasingly prominent field of heterogeneous computing.

Implementing heterogeneous computing on OpenPOWER server systems, using the CAPI interface to connect FPGAs and build hardware accelerators, can significantly improve system performance, break the bottlenecks of computing and data transmission, reduce acquisition and operating costs, and realize the many possibilities of heterogeneous computing.

Looking back at the 2018 OpenPOWER/CAPI + OpenCAPI Heterogeneous Computing Design Contest: 27 teams from 17 universities registered for the competition, and after three months of hands-on development, debugging, testing and tuning, they successfully built CAPI/OpenCAPI-based design prototypes and put heterogeneous computing into practice. Their outstanding learning and development abilities convince us that they will gradually grow into a core force of technological innovation!

And this year: break down the barriers, lead the acceleration. Are you ready?

## About the Contest

The 2019 OpenPOWER + OpenCAPI Heterogeneous Computing Design Contest is hosted by the OpenPOWER Foundation and the OpenCAPI Consortium, organized by IBM China, and co-organized by Inspur Power Commercial Systems Co., Ltd., with the support of many OpenPOWER Foundation members. It aims to encourage universities and research institutions to learn about and practice heterogeneous computing, using the advanced FPGA heterogeneous computing technology of OpenPOWER systems to broaden horizons, innovate actively, and accelerate the practical application of technological innovation.

Participants will have the opportunity to work with OpenPOWER Foundation members and develop on advanced OpenPOWER system platforms, experiencing professional development environments and methodologies and receiving one-on-one technical guidance from corporate mentors. Besides prize money, winning students will also have the chance to become IBM interns and to receive hiring priority.

In addition, the OpenPOWER Foundation welcomes universities to join as Academic/Associate members (no membership fee; see [https://openpowerfoundation.org/membership/levels/](https://openpowerfoundation.org/membership/levels/)).

**Press and hold to scan the code to register and submit your preliminary proposal**

Registration and proposal submission are open from 2019.9.24 to 2019.10.25.

## Organizing Bodies

**Hosts**

OpenPOWER Foundation

OpenCAPI Consortium

**Organizer**

IBM China

**Co-organizer**

Inspur Power Commercial Systems Co., Ltd.

**Partners**

Alpha Data

CT-Accel

Beijing Mellanox Technology Co., Ltd. (Mellanox)

Xilinx Electronic Technology (Shanghai) Co., Ltd. (Xilinx)

## Background

Heterogeneous computing refers to systems that use more than one kind of processor. Such multi-core systems improve performance not only by adding processor cores, but also by incorporating specialized processing capabilities, such as GPUs or FPGAs, to handle specific tasks.

In recent years, as silicon chips approach their physical and economic limits, Moore's law has been breaking down. At the same time, the vigorous growth of the Internet, the explosive growth of information, and the research and popularization of AI technology have raised the demands on computing power ever higher. Heterogeneous computing does not limit its focus to CPU performance; it breaks the data-transfer bottleneck between the CPU and peripherals and lets more hardware devices participate in computing, for example using dedicated hardware for intensive computation or peripheral management, which can significantly improve whole-system performance. Without doubt, heterogeneous computing is the mainstream direction for increasing computing power.

Taking part in the OpenCAPI heterogeneous computing design contest is a chance not only to learn about the most advanced technology in today's processors and system hardware, but also to turn your ingenuity into the starting point of a breakthrough piece of research or an application.

## Eligibility

The contest is open to any interested university or research institution in China. Registration is organized by school, and the contest is a team competition. The specific requirements are:

- Each team consists of one or more students and one faculty adviser. The adviser must be a full-time teacher at the team's university; one adviser may coach multiple teams.
- A school may field multiple teams.
- Participants must have active student status at the time of registration.
- Team members must ensure that their registration information is accurate and valid.

## Prizes

The 10 teams shortlisted in the preliminary round advance to the final round, which awards first, second and third prizes plus encouragement awards. The prize money (pre-tax) is:

First prize: 1 team, RMB 25,000

Second prize: 1 team, RMB 20,000

Third prize: 1 team, RMB 15,000

Encouragement award: the other 7 finalist teams, RMB 5,000 each

## Schedule and Format

The contest has two stages: a preliminary round judged online, and a final round judged through an open project defense. The schedule is as follows:

| Stage | Time | Content |
| --- | --- | --- |
| Preliminary | 9/24-10/25 | Preliminary proposal design and submission |
| | 10/26-11/06 | Expert review of preliminary entries |
| | 11/07 | Announcement of the 10 finalist teams |
| Final | 11/08-03/06/2020 | Final-round development and submission |
| | 03/07/2020-03/14/2020 | Expert review of final entries |
| | 03/18/2020 | Final defense and award ceremony |

**Preliminary round:** teams choose an application scenario that can be accelerated, conceive a system design, and propose a design with innovative ideas.

The following categories are offered for reference, without restriction:

- Solving compute bottlenecks: massively parallel data processing can be applied to neural networks, image and video, cryptography, network security, databases, and data computation in a wide range of fields (finance, geology, biology, materials, physics, and so on).
- Solving data-transfer bottlenecks: ultra-high transfer bandwidth can be applied to network transmission and more flexible data-storage methods, and the FPGA can process data as it passes through, greatly relieving the CPU load on the server side.

Senior IBM experts will guide teams in choosing application scenarios that match their research areas. Each team conceives a system design, performs a feasibility analysis, partitions the algorithm flow between software and hardware, and estimates bandwidth, compute density and efficiency. At this stage only a written proposal is required, that is, an architecture design and a performance-prediction analysis report.

**Final round:** teams review their system designs together with senior IBM experts and move into concrete development.

- The development environment, provided by the hosts and partners, is a remote environment built from OpenPOWER servers and FPGA cards that support the CAPI interface. The main work includes software/hardware development, debugging, and recording and analyzing test results.
- During development, corporate mentors provide one-on-one coaching to help participants turn their designs into prototypes. Final entries must be submitted as a paper containing the prototype development report and the analyzed test results.

Detailed submission requirements and procedures will be published as the contest proceeds; the hosts' latest announcements take precedence.

## More Details

**CAPI and OpenCAPI**

CAPI stands for Coherent Accelerator Processor Interface. It is an interface technology that lets an external I/O device share memory coherently with the CPU. Take the FPGA as an example: as a field-programmable gate array it offers astonishing parallel processing power and is fully customizable, but when placed in a system it is still an external device. For it to take part in heterogeneous computing and cooperate with the CPU, shared memory is essential. Technically, connecting an FPGA over CAPI as a heterogeneous computing platform has the following advantages:

- It is a cache-coherent acceleration interface: the FPGA can access memory directly, just like a CPU. This avoids the address-translation steps of conventional hardware/software co-design, greatly simplifying programming, which in turn reduces development cost and shortens the development cycle.
- The host-side program runs entirely in user space; no PCIe device driver needs to be written.
- Latency between the FPGA, as an I/O device, and the host is lower.
- As FPGA processing power grows, bandwidth becomes the bottleneck. CAPI runs over the industry's leading PCIe Gen4 (16 Gbps) and OpenCAPI (25 Gbps) channels: bandwidth to spare.
- OpenCAPI also supports memory expansion over the I/O channel, opening the door to accelerating big-data applications with storage-class memory (SCM).

OpenCAPI is an independent standards body ([www.opencapi.org](http://www.opencapi.org)). It opens up the next-generation CAPI technical specification and is committed to moving high-speed hardware interface design fully into the era of memory coherence, supporting the trend toward heterogeneous computing with a solid technical foundation. OpenCAPI debuted with POWER9, shipping in POWER9 and OpenPOWER9 servers, but its design is not tied to the Power architecture and can be embedded in other processor architectures.

**Power Systems and OpenPOWER**

Power Systems high-performance computing servers appear in many of the world's largest clusters. Power Enterprise servers are designed for data, delivering extreme resilience, availability, and security. They are widely used in the core business systems of banks, governments, airlines, energy companies, and more, supporting demanding workloads such as genomics, finance, computational chemistry, oil and gas exploration, and high-performance data analytics.

In 2013 IBM opened the Power server architecture and founded the OpenPOWER Foundation (https://openpowerfoundation.org/). More than 340 companies from 34 countries and regions have since joined; core members include IBM, Google, Nvidia, Red Hat, Canonical (Ubuntu), Hitachi, Inspur, and Wistron, jointly building an open OpenPOWER ecosystem. Compared with traditional Power systems, Linux-based OpenPOWER systems are designed and produced mainly by Foundation members, offering a clear price advantage while still delivering excellent performance and return on investment for compute- and data-intensive applications. These servers provide the flexibility to integrate innovative technology quickly, avoid vendor lock-in, and accelerate business results.

In early 2018 IBM announced the POWER9 processor. Designed for compute-intensive AI workloads, POWER9 was among the first processors to incorporate PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI. Systems built on it substantially improve the performance of major AI frameworks such as Chainer, TensorFlow, and Caffe, and accelerate databases such as Kinetica, providing signal-bus bandwidth beyond any previous design. Data scientists can thus build applications faster, from deep-learning research to real-time fraud detection and credit risk analysis. POWER9 is at the heart of the U.S. Department of Energy's Summit and Sierra supercomputers, the most powerful data-intensive supercomputers in the world today.

@ -1,19 +0,0 @@
---
title: Virginia Tech opens up MIPS and POWER based Computer Architecture Curriculum
categories:
- blogs
tags:
- openpower
- openpower-foundation
- linux-foundation
- open-source
- Virginia Tech
- Curriculum
- Computer Architecture
date: 2022-09-14
draft: true
---

Today, we are pleased to announce that MIPS and POWER based computer architecture curriculum developed by Dr. Wu Feng of Virginia Tech has been released publicly as open source.

https://github.com/w-feng/CompArch-MIPS-POWER

@ -1,9 +0,0 @@
---
title: Blogs
outputs:
- html
- rss
- json
date: 2022-01-31
draft: false
---

@ -1,41 +0,0 @@
---
title: "A Better Way to Compress Big Data"
date: "2018-03-08"
categories:
- "blogs"
tags:
- "openpower"
- "center-for-genome-research-and-biocomputing"
- "oregon-state-university"
- "ibm-power-systems"
---

## **Wasting CPU hours on compression**

The Center for Genome Research and Biocomputing (CGRB) has a large computing resource that supports researchers at Oregon State University by providing processing power, file storage service and more. This computational resource is also used to capture all data generated from the CGRB Core Laboratory facility that processes biological samples used in High Throughput Sequencing (HTS) and other data rich tools.

Currently, the CGRB Core Lab generates between 4TB and 8TB of data per day, which lands directly on the biocomputing resource and is made immediately available to researchers. Because of this, the CGRB has over 4PB of usable space within our biocomputing facility and continues to add space monthly. Since individual labs must purchase the file space needed to accomplish their research, there is always pressure from the lab managers to have users clean up and reduce space, allowing new experiments to be done without the need to purchase more space. This process leads many users to spend CPU time compressing data that is needed for later use, in order to free up the lab's available space. Since we like to use processing machines for processing data, not just compressing it, we needed a solution that allows GZIP work to be done without tying up our CPU hours.

## **More computing, faster**

To reduce loads on the processing machines and computational time devoted to compressing data, we started considering FPGA cards.

Specifically, we evaluated offloading compression processes directly onto a peripheral FPGA card. Offloading compression would increase our output and help manage file space usage so groups do not have to purchase more space to start new experiments.

The new IBM Power Systems POWER8 machines include an interface used to increase speed from CPUs to FPGAs in expansion slots. The Coherent Accelerator Processor Interface (CAPI) connects the expansion bus and allows users to access resources external to the main CPU and memory with up to 238 GB/sec bus speed, thus overcoming a key limitation when working with large data sets.

Our users do take advantage of the capabilities of the FPGA card: they not only complete their tasks more quickly, but also free up additional CPU hours for other researchers on the cluster. The solution has provided a net benefit in resource utilization and has thus allowed _all_ users to do more computing, faster.

## **The GZIP coprocessor success story**

Initial tests showed that a small job compressing a 22-gigabyte file took over 9 minutes on the CPU, while the same file finished in 19 seconds on the FPGA card. We then scaled these tests to a massively larger volume of data and found that a job that would take 67 hours on the CPU took only 50 minutes on the FPGA.

The FPGA GZIP coprocessor has allowed our researchers and staff to quickly recover valuable file space while speeding up analytics and processing. The coprocessor has its own queue, allowing users to submit jobs that access the gzip card rather than waiting to use it interactively. As the coprocessor can only be utilized by a single process at any given time, the queuing system gives multiple users a mechanism to submit jobs to the card without overloading it, since the queue waits for one job to finish before beginning the next.
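A single-owner job queue of this kind can be sketched in a few lines of Python (a generic worker-queue pattern, not the CGRB scheduler itself; the file names and the stand-in for the gzip-card call are made up for illustration):

```python
import queue
import threading

# One worker thread "owns" the single-process gzip card; jobs line up in a
# queue and run strictly one at a time, so the card is never overloaded.
jobs = queue.Queue()
results = []

def card_worker():
    while True:
        name = jobs.get()
        if name is None:  # sentinel: no more jobs
            break
        # Stand-in for invoking the gzip card on one file.
        results.append(f"compressed {name}")

worker = threading.Thread(target=card_worker)
worker.start()
for f in ("a.fastq", "b.fastq", "c.fastq"):
    jobs.put(f)
jobs.put(None)
worker.join()
print(results)
```

Because there is exactly one worker, submissions from many users serialize naturally, which is the same guarantee the batch queue provides for the physical card.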

We have seen as much as a 100-fold increase in the rate at which we can compress and decompress data to and from our storage cluster. These data largely consist of text-based strings (e.g., A, C, T and G nucleotides), meaning they are highly compressible.
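How compressible such nucleotide text is can be checked with ordinary gzip; here is a minimal Python sketch on synthetic data (random bases, not real CGRB sequence files):

```python
import gzip
import random

# Synthetic "sequence" data: random nucleotides carry about 2 bits of
# entropy per 8-bit character, so DEFLATE should shrink the text to well
# under half its original size.
random.seed(42)
data = "".join(random.choice("ACGT") for _ in range(1_000_000)).encode()

compressed = gzip.compress(data, compresslevel=6)
ratio = len(data) / len(compressed)
print(f"original: {len(data)} B, compressed: {len(compressed)} B, ratio: {ratio:.2f}")
```

Real reads compress even better than this random-base sketch, since runs, repeats, and per-read quality strings give the compressor more structure to exploit.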

The compression ratio achieved with the gzip card is inferior to that obtained by running gzip directly through the main processor. Our observations indicate that the gzip card yields approximately 80% of the compression obtained using standard methods. This was within an acceptable range for our users since the speed of both compression and decompression is so much greater than those achieved by the standard methods.

<table><tbody><tr><td>15 GB .fastq sequence file</td><td><strong>Compressed</strong></td><td><strong>Runtime</strong></td><td><strong>Compression ratio</strong></td><td><strong>Compression rate (GB/s)</strong></td></tr><tr><td>CPU gzip</td><td>3.1 GB</td><td>28m 53s</td><td>5.16</td><td>0.006</td></tr><tr><td>CPU gzip -9</td><td>2.9 GB</td><td>133m 36s</td><td>5.17</td><td>0.001</td></tr><tr><td>Power/CAPI Genwqe_gzip</td><td>4.2 GB</td><td>71 seconds</td><td>3.57</td><td>0.152</td></tr></tbody></table>

**Table-1:** Compression ratio comparison between CPU and FPGA of a 15GB fastq DNA sequence file.

@ -1,58 +0,0 @@
---
title: "A Deep Dive into A2I and A2O"
date: "2020-12-21"
categories:
- "blogs"
tags:
- "openpower"
- "ibm"
- "power"
- "openpower-foundation"
- "open-source"
- "a2i"
- "a2o"
- "open-hardware"
- "developer-community"
- "isa"
- "power-processor-core"
---

**By [Abhishek Jadhav,](https://www.linkedin.com/in/abhishek-jadhav-60b30060/) Lead Open Hardware Developer Community (India) and Freelance Tech Journalist**

After the opening of the [POWER instruction set architecture (ISA)](https://newsroom.ibm.com/2019-08-21-IBM-Demonstrates-Commitment-to-Open-Hardware-Movement) last August, there have been many developments from IBM and its community.

Some major contributions include OpenPOWER's A2I and A2O POWER processor cores.

The OpenPOWER Foundation, which is under the umbrella of the Linux Foundation, works on the advocacy of POWER Instruction Set Architecture and its usage in the industry.

## **What is the A2I core?**

[A2I core](https://github.com/openpower-cores/a2i/blob/master/rel/doc/A2_BGQ.pdf) was created as a high-frequency four-threaded design, optimized for throughput and targeted for 3 GHz in 45nm technology. It was created to provide high streaming throughput, balancing performance and power.

![](images/IB1-1024x680.png)

_“With a strong foundation of the open POWER ISA and now the A2I core, the open source hardware movement is poised to accelerate faster than ever,” said James Kulina, Executive Director, OpenPOWER Foundation._

A2I was developed as a processor for customization and embedded use in system-on-chip (SoC) devices. However, it is not limited to that: it can also be found in supercomputers with appropriate accelerators. A diverse range of applications is associated with the core, including streaming, network processing, and data analysis.

We have an [Open Hardware Developer Community](https://www.linkedin.com/groups/12431698/) with contributors across India working on A2I in multiple use cases, and contributions from the open source community continue to grow.

If you want a headstart on A2I core, check out this short [tutorial](https://github.com/openpower-cores/a2i/blob/master/rel/doc/a2_build_video.md) on how to get started.

## **The launch of A2O**

A couple of months after the A2I core's release at [OpenPOWER Summit 2020](https://events.linuxfoundation.org/openpower-summit-north-america/), the OpenPOWER Foundation announced the A2O POWER processor core, an out-of-order follow-up to the A2I core. The A2O processor core is now open source as a POWER ISA core for embedded use in SoC designs. The A2O offers better single-threaded performance, supports Power ISA 2.07, and has a modular design.

![](images/IMB2-1024x575.png)

Potential A2O POWER processor core applications include artificial intelligence, autonomous driving, and secure computing.

If you want to get started with A2O POWER processor core, watch this short [tutorial](https://github.com/openpower-cores/a2o/blob/master/rel/doc/a2_build_video.md).

The A2O reference manual is available [here](https://github.com/openpower-cores/a2o/blob/master/rel/doc/A2O_UM.pdf).


Join the [Open Hardware Developer Community](https://www.linkedin.com/groups/12431698/) to engage in exciting projects on A2I and A2O processor core.

_Source: All the images were taken from the_ [_Github Repo_](https://github.com/openpower-cores/a2i/tree/master/rel/doc) _and_ [_OpenPOWER Summit North America 2020_](https://openpowerna2020.sched.com/event/eOyb/ibm-open-sources-the-a2o-core-bill-flynn-ibm)_._

@ -1,38 +0,0 @@
---
title: "A POWERFUL Birthday Gift to Moore's Law"
date: "2015-04-12"
categories:
- "blogs"
tags:
- "featured"
---

By Bradley McCredie

President, OpenPOWER Foundation

As we prepare to join the computing world in celebrating the 50th anniversary of Moore's Law, we can't help but notice how the aging process has slowed it down. In fact, in a [recent interview](http://spectrum.ieee.org/computing/hardware/gordon-moore-the-man-whose-name-means-progress) with IEEE Spectrum, Moore said, “I guess I see Moore's Law dying here in the next decade or so.”  But we have not come to bury Moore's Law.  Quite the contrary: we need the economic advancements that are derived from the scaling Moore's Law describes to survive -- and they will -- if it adapts yet again to changing times.

It is clear, as the next generation of warehouse scale computing comes of age, that sole reliance on the “tick tock” approach to microprocessor development is no longer viable.  As I told the participants at our first OpenPOWER Foundation summit last month in San Jose, the era of relying solely on generation-to-generation improvements of the general-purpose processor is over.  The advancement of the general-purpose processor is being outpaced by the disruptive and surging demands being placed on today's infrastructure.  At the same time, the need for the cost/performance advancement and computational growth rates that Moore's Law used to deliver has never been greater.   OpenPOWER is a way to bridge that gap and keep Moore's Law alive through customized processors, systems, accelerators, and software solutions.  At our San Jose summit, some of our more than 100 Foundation members, spanning 22 countries and six continents, unveiled the first of what we know will be a growing number of OpenPOWER solutions, developed collaboratively and built upon the non-proprietary IBM POWER architecture. These solutions include:

- Prototype of IBM's first OpenPOWER high performance computing server on the path to exascale
- First commercially available OpenPOWER server, the TYAN TN71-BP012
- First GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950
- Rackspace open server specification and motherboard mock-up combining OpenPOWER, Open Compute and OpenStack

Together, we are reimagining the data center, and our open innovation business model is leading historic transformation in our industry.

The OpenPOWER business model is built upon a foundation of a large ecosystem that drives innovations and shares the profits from those innovations. We are at a point in time where business model innovation is just as important to our industry as technology innovation.

You don't have to look any further than OpenPOWER Chairman Gordon MacKean's company, Google, to see an example of what I mean. While the technology that Google creates and uses is leading in our industry, Google would not be even a shadow of the company it is today without its extremely innovative business model. Google gives away all of its advanced technology for free and monetizes it through other means.

In fact, if you think about it, almost all of the fastest-growing “new companies” in our industry are built on innovative technology ideas, but the most successful ones all leverage business model innovations as well.

The early successes of the OpenPOWER approach confirm what we all know: to expedite innovation, we must move beyond a processor- and technology-only design ecosystem to one that takes into account system bottlenecks, system software, and, most importantly, the benefits of an open, collaborative ecosystem.

This is about how organizations, companies and even countries can address disruptions and technology shifts to create a fundamentally new competitive approach.

No one company alone can spark the magnitude or diversity of the type of innovation we are going to need for the growing number of hyper-scale data centers. In short, we must collaborate not only to survive…we must collaborate to innovate, differentiate and thrive.

The OpenPOWER Foundation, our global team of rivals, is modeling what we IBMers like to call “co-opetition”: competing when it is in the best interest of our companies and cooperating with each other when it helps us all.  This combination of breakthrough technologies and unprecedented collaboration is putting us in the forefront of the next great wave of computing innovation.  Which takes us back to Moore's Law.  In 1965, when Gordon Moore gave us a challenge and a roadmap to the future, there were no smartphones or laptops, and wide-scale enterprise computing was still a dream.  None of those technology breakthroughs would have been possible without the vision of one man who shared it with the world.  OpenPOWER is a bridge we share to a new era. Who knows what breakthroughs it will spawn in our increasingly technology-driven and connected world.  As Moore's Law has shown us, the future is wide open.

@ -1,31 +0,0 @@
---
title: "A2I POWER Processor Core Contributed to OpenPOWER Community to Advance Open Hardware Collaboration"
date: "2020-06-30"
categories:
- "blogs"
tags:
- "openpower"
- "ibm"
- "openpower-foundation"
- "linux-foundation"
- "power-isa"
- "open-source"
- "ibm-a2i"
- "a2i-power-processor"
- "open-source-hardware"
- "open-source-summit"
---

At The Linux Foundation Open Source Summit today, the OpenPOWER Foundation announced a major contribution to the open source ecosystem: the IBM A2I POWER processor core design and associated FPGA environment. Following the [opening of the POWER Instruction Set Architecture (ISA)](https://newsroom.ibm.com/2019-08-21-IBM-Demonstrates-Commitment-to-Open-Hardware-Movement) last August, today's announcement further enables the OpenPOWER Foundation to cultivate an ecosystem of open hardware development.

![A2I POWER Processor Core](images/A2I-POWER-Processor-Core-1024x583.png)

The A2I core is an in-order multi-threaded 64-bit POWER ISA core that was developed as a processor for customization and embedded use in system-on-chip (SoC) devices. It was designed to provide high streaming throughput while balancing performance and power. Originally the “wire-speed processor” of the Edge-of-Network SoC called PowerEN, it was later selected as the general purpose processor used in IBM's BlueGene/Q family of systems, which helped to advance scientific discovery over the last decade. Built for modularity, A2I has the ability to add an Auxiliary Execution Unit (AXU) that is tightly-coupled to the core, enabling many possibilities for special-purpose designs for new markets tackling the challenges of modern workloads.

“A2I has demonstrated its durability over the last decade - it's a powerful technology with a wide range of capabilities,” said Mendy Furmanek, President, OpenPOWER Foundation and Director, POWER Open Hardware Business Development, IBM. “We're excited to see what the open source community can do to modernize A2I with today's open POWER ISA and to adapt the technology to new markets and diverse use cases.”

“With a strong foundation of the open POWER ISA and now the A2I core, the open source hardware movement is poised to accelerate faster than ever,” said [James Kulina](https://www.linkedin.com/in/james-kulina/), Executive Director, OpenPOWER Foundation. “A2I gives the community a great starting point and further enables developers to take an idea from paper to silicon.”

The A2I core is available on GitHub and [can be accessed here](https://github.com/openpower-cores/a2i).

[Register for OpenPOWER Summit North America 2020](https://events.linuxfoundation.org/openpower-summit-north-america/) - a free, virtual experience - to learn more about the A2I core and other developments across the OpenPOWER ecosystem.

@ -1,27 +0,0 @@
---
title: "Academic and Industry Experts Share Expertise During OpenPOWER and AI Workshop at Loyola Institute of Technology"
date: "2019-03-07"
categories:
- "blogs"
---

By [Dr. Sujatha Jamuna Anand](https://www.linkedin.com/in/dr-sujatha-jamuna-anand-4251ba92/), Principal, Loyola Institute of Technology

![](images/loyola-1-300x150.jpg)

We recently held the OpenPOWER and AI training workshop in Chennai, India. In addition to faculty and students from [Loyola Institute of Technology](https://litedu.in/), we were joined by academic and industry experts from [IBM](https://www.ibm.com/us-en/?ar=1), [Open Computing Singapore](https://opencomputing.sg/), [Indian Institute of Technology Madras](https://www.iitm.ac.in/), [University of Engineering and Management Kolkata](http://uem.edu.in/uem-kolkata/) and [Object Automation](http://www.object-automation.com/).

Attendees learned from a number of sessions:

- [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), IBM shared insight on AI, deep learning inferencing and edge computing. As part of his presentation, he shared several use cases which have been deployed in multiple industries around the world.
- [Jayaram Kizhekke Pakkathillam](https://www.linkedin.com/in/jayaram-kizhekke-pakkathillam-6b2b0963/), IIT Madras gave a brief introduction to unmanned aerial vehicles (UAVs) and the projects he's worked on as part of the IIT Madras Aerospace Engineering department. He also discussed how UAVs are effectively used for military and agricultural purposes, with examples of different AI systems.
- [Wilson Josup](https://www.linkedin.com/in/wilson-josup-cdcp-ccca-a18ab943/), Open Computing Singapore spoke about the difference between CPUs and GPUs, different types and use cases of GPUs and how OpenPOWER architecture innovations contribute to improved performance from applications.
- [Gayathri Venkataramanan](https://www.linkedin.com/in/gayathri-venkataramanan-0a8831166/), Object Automation and [Prince Barai](https://www.linkedin.com/in/prince-pratik7/), University of Engineering and Management Kolkata delivered various AI use cases with excellent examples.

Beyond features of AI, several presentations and demonstrations answered how data-driven innovation can be brought to life, and what steps are needed to move AI out of the lab and into mainstream business.

The OpenPOWER and AI Workshop provided opportunities for young students to initiate their own AI-related projects and collaborations.


![](images/loyola-2-300x225.jpg)

@ -1,26 +0,0 @@
---
title: "Accelerated Photodynamic Cancer Therapy Planning with FullMonte on OpenPOWER"
date: "2015-01-19"
categories:
- "blogs"
---

### Abstract

Photodynamic therapy (PDT) is a minimally-invasive cancer therapy which uses a light-activated drug (photosensitizer/PS). When the photosensitizer absorbs a photon, it excites tissue oxygen into a reactive state which causes very localized cell damage. The light field distribution inside the tissue is therefore one of the critical parameters determining the treatment's safety and efficacy. While FDA-approved and used for superficial indications, PDT has yet to be widely adopted for interstitial use for larger tumours using light delivered by optical fibres, due to a lack of simulation and planning optimization software. Because tissue at optical wavelengths has a very high scattering coefficient, extensive Monte Carlo modeling of light transport is required to simulate the light distribution for a given treatment plan. To enable PDT planning, we demonstrate here our “FullMonte” system, which uses a CAPI-enabled FPGA to simulate light propagation 4x faster and 67x more power-efficiently than a highly-tuned multicore CPU implementation. With coherent low-latency access to host memory, we are not limited by the size of on-chip memory and are able to transfer results to and from the accelerator rapidly, which will support our iterative planning flow. Potential advantages of interstitial PDT include less invasiveness and fewer post-operative complications than surgery, better damage targeting and confinement than radiation therapy, and no systemic toxicity, unlike chemotherapy. While attractive for developed markets for better outcomes, PDT is doubly attractive in emerging regions because it offers the possibility of a single-shot treatment with very low-cost and even portable equipment, supported by remotely-provided computing services for planning.
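The computational burden described in the abstract comes from the physics: with a high scattering coefficient, each photon undergoes many scattering events before absorption. A toy isotropic random-walk sketch in Python illustrates the idea (the coefficients are illustrative placeholders; FullMonte's actual kernel models anisotropic scattering, tissue boundaries, and fluence scoring):

```python
import math
import random

def simulate_photon(rng, mu_s=100.0, mu_a=1.0):
    """Random-walk one photon; returns (scattering steps, absorption depth in cm).

    mu_s / mu_a are scattering / absorption coefficients (1/cm). A high
    mu_s relative to mu_a means many scattering events per photon, which
    is what makes Monte Carlo light transport so compute-hungry.
    """
    x = y = z = 0.0
    mu_t = mu_s + mu_a
    steps = 0
    while True:
        step = -math.log(1.0 - rng.random()) / mu_t   # exponential free path
        theta = math.acos(1.0 - 2.0 * rng.random())   # isotropic direction
        phi = 2.0 * math.pi * rng.random()
        x += step * math.sin(theta) * math.cos(phi)
        y += step * math.sin(theta) * math.sin(phi)
        z += step * math.cos(theta)
        steps += 1
        if rng.random() < mu_a / mu_t:                # absorbed at this site
            return steps, math.sqrt(x * x + y * y + z * z)

rng = random.Random(0)
results = [simulate_photon(rng) for _ in range(1000)]
mean_steps = sum(s for s, _ in results) / len(results)
print(f"mean scattering events per photon: {mean_steps:.0f}")
```

Each photon averages on the order of mu_t/mu_a steps, and clinical-quality simulations trace millions of photons, which is why a data-flow FPGA pipeline pays off so dramatically.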

### Bios

Jeffrey Cassidy, MASc, PEng is a PhD candidate in Electrical and Computer Engineering at the University of Toronto. Lothar Lilge, PhD is a senior scientist at the Princess Margaret Cancer Centre and a professor of Medical Biophysics at the University of Toronto. Vaughn Betz, PhD is the NSERC-Altera Chair in Programmable Silicon at the University of Toronto.

### Acknowledgements

The work is supported by the Canadian Institutes of Health Research, the Canadian Natural Sciences and Engineering Research Council, IBM, Altera, Bluespec, and the Southern Ontario Smart Computing Innovation Platform.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Cassidy-Jeff_OPFS22015_031015_final.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Cassidy-Jeff_OPFS22015_031015_final.pdf)


@ -1,92 +0,0 @@
---
title: "Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER"
date: "2015-11-13"
categories:
- "blogs"
tags:
- "capi"
- "fpga"
- "xilinx"
- "kvs"
---

_By Michaela Blott, Principal Engineer, Xilinx Research_

First, a bit of background: I lead a research team in the European headquarters of Xilinx, where we look into FPGA-based solutions for data centers. We experiment with the most advanced platforms and tool flows, hence our interest in OpenPOWER. If you haven't worked with an FPGA yet, it's a fully programmable piece of silicon that allows you to create the perfect hardware circuit for your application, thereby achieving best-in-class performance through customized data-flow architectures, as well as substantial power savings.  That means we can investigate how to make data center applications faster, smarter and greener while scrutinizing silicon features and tool flows. Our first application deep-dive was, and still is, key-value stores.

Key-value stores (KVS) are a fundamental part of today's data center functionality. Facebook, Twitter, YouTube, flickr and many others use key-value stores to implement a tier of distributed caches for their web content, to alleviate access bottlenecks on relational databases that don't scale well. Up to 30% of data center servers implement key-value stores. But data centers are hitting a wall with performance requirements that drive trade-offs between high DRAM costs (in-memory KVS), bandwidth, and latency.
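The caching tier works on a simple look-aside principle; a toy Python sketch (a generic memcached-style pattern with a plain dict standing in for the slow relational database, not any particular deployment):

```python
class LookAsideCache:
    """Toy memcached-style look-aside cache in front of a slow backing store."""

    def __init__(self, backing_store):
        self.store = backing_store   # stand-in for the relational database
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.store[key]      # slow path: query the database
        self.cache[key] = value      # populate the cache for next time
        return value

    def set(self, key, value):
        self.store[key] = value
        self.cache.pop(key, None)    # invalidate; next get re-reads the store

db = {"user:1": "alice", "user:2": "bob"}
cache = LookAsideCache(db)
cache.get("user:1")
cache.get("user:1")
print(cache.hits, cache.misses)
```

The first `get` misses and falls through to the database; the second is served from the cache. Scale this to millions of requests per second and it is clear why the cache tier, not the database, becomes the performance-critical component.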

We've been investigating key-value stores such as memcached since 2013 \[1,2\]. Initially the focus was on pure acceleration and power reduction. Our work demonstrated a substantial 35x performance/power advantage over the fastest x86 results published at the time. The trick was to completely transform the multithreaded software implementation into a data-flow architecture inside an FPGA, as shown below.

![Fig 1](images/Fig-1.jpg)

_Figure 1: 10Gbps memcached with FPGAs_

However, there were a number of limitations. First, we were not happy with the constrained amount of DRAM that can be attached to an FPGA -- capacity is really important in the KVS context. Secondly, we were concerned about supporting more functionality. For example, for protocols like Redis with its 200 commands, things can get complicated. Thirdly, we worried about ease of use, which is a typical adoption barrier for FPGAs. Finally, things become even more interesting once you add intelligence on top of your data: data analytics, object recognition, encryption, you name it. For this we really need a combination of compute resources that coherently shares memory. That's exactly why OpenPOWER presented a unique and most timely opportunity to experiment with coherent interfaces.

**Benchmarking CAPI**

CAPI, the Coherent Accelerator Processor Interface, enables high performance and simple programming models for attaching FPGAs to POWER8 systems. First, we benchmarked PCIe- and CAPI-attached acceleration against x86 in-memory models to determine their latency. The results are explained below:

![Figure2_new](images/Figure2_new.jpg)

_Figure 2: System-level latency, OpenPOWER with FPGA vs. x86_

**Latency**

PCIe DMA engines and CAPI perform significantly better than typical x86 implementations. At 1.45 microseconds, CAPI operations are so low-latency that the overall system-level impact is next to negligible.  Typical x86 installations service memcached requests within a range of 100s to 1000s of microseconds. Our OpenPOWER CAPI installation services the same requests in 3 to 5 microseconds, as illustrated in Figure 2 (which uses a logarithmic scale).

![Figure3_new](images/Figure3_new.jpg)

_Figure 3: PCIe vs. CAPI bandwidth over transfer sizes_

**Bandwidth**

Figure 3 shows measured bandwidth vs. transfer size for CAPI in comparison to a generic PCIe DMA. The numbers shown are actual measurements \[4\] and are representative in that PCIe performance is typically very low for small transfer sizes and next to optimal for large transfer sizes. So for small granular access, CAPI far outperforms PCIe. Because of this, CAPI provides a perfect fit for the small transfer sizes required in the KVS scenario. For implementing object storage in host memory, we are really only interested in using CAPI in the range of transfer sizes from 128 bytes to 1 kbyte. Smaller objects can easily be accommodated in FPGA-attached DRAM; larger objects can be accommodated in Flash (see also our HotStorage 2015 publication \[3\]).
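The shape of such bandwidth curves follows from a simple latency/throughput model: effective bandwidth is roughly transfer size divided by (fixed per-transfer overhead + size / peak rate), so small transfers are latency-dominated. A quick Python illustration (the overhead and peak values here are made-up placeholders, not the measured numbers behind Figure 3):

```python
def effective_bw(size_bytes, latency_s, peak_gbps):
    """Effective bandwidth (GB/s) for a single transfer of `size_bytes`."""
    peak_bps = peak_gbps * 1e9                    # peak link rate, bits/s
    seconds = latency_s + size_bytes * 8 / peak_bps
    return size_bytes / seconds / 1e9             # achieved GB/s

# Hypothetical link: 2 microseconds per-transfer overhead, 60 Gb/s peak.
for size in (64, 256, 1024, 4096, 65536, 1 << 20):
    print(f"{size:>8} B -> {effective_bw(size, 2e-6, 60.0):6.3f} GB/s")
```

With these placeholder numbers a 64-byte transfer achieves only a tiny fraction of peak, while a 1 MB transfer approaches it, which is why a low-latency interconnect matters so much for the 128 B to 1 kB objects typical of KVS workloads.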

**FPGA Design**

Given the promising benchmarking results, we proceeded to integrate the host memory via CAPI. For this we created a hybrid memory controller which routes and merges requests and responses between the various storage types, handles reordering, and provides a gearbox for varying access speeds and bandwidths. With these simple changes, we now have up to 1 Terabyte of coherent memory space at our disposal without loss of performance! Figure 4 shows the full implementation inside the FPGA.

![Figure4](images/Figure4.jpg)

_Figure 4: Memcached implementation with OpenPOWER and FPGA_

**Ease of Use**

Our next biggest concern was ease of use, for both FPGA design entry and host-accelerator integration. In regard to the latter, OpenPOWER exceeded our expectations. Using the provided API from IBM (libcxl) as well as the POWER Service Layer IP that resides within the FPGA (PSL), we completed system integration within a matter of weeks while saving huge amounts of code: 800 lines of code, to be precise, for the x86 driver, memory allocation, and pinning, and 13.5k fewer instructions executed!

Regarding the FPGA design, it was of utmost importance to ensure that a fully functional and high-performing design could be created through a high-level design flow (C/C++ at minimum), in the first instance using Xilinx's high-level synthesis tool, Vivado HLS. The good news was that we fully succeeded in doing this: the resulting application design was fully described in C/C++, achieving a 60% reduction in lines of code (11359 RTL vs 4069 HLS lines). The surprising bonus was that we even got a resource reduction (for FPGA-savvy readers: 22% in LUTs and 30% in FFs). And let me add, just in case you are wondering, the RTL designers were at the top of their class!

The only low-level aspects left in the design flow are the basic infrastructure IP, such as memory controllers and network interfaces, which are still manually integrated. In the future, this will be fully automated through SDAccel. In other words, a full development environment that requires no further RTL development is on the horizon.

**Results**

![Figure5](images/Figure5.jpg)

_Figure 5: Demonstration at the OpenPOWER Summit 2015_

We demonstrated the first operational prototype of this design at Impact in April 2014 and then demonstrated the fully operational demo vehicle (shown in Figure 5), including fully CAPI-enabled access to host memory, at the OpenPOWER Summit in March 2015. The work is now fully integrated with [IBM's SuperVessel](http://www.ptopenlab.com). In the live demonstration, the OpenPOWER system outperforms an x86 implementation by 20x (see Figure 6)!

![kvs_comparison](images/kvs_comparison-1024x577.jpg)

_Figure 6: Screenshot of network tester showing response traffic rates from OpenPOWER with FPGA acceleration versus x86 software solution_

**Summary**

The Xilinx demo architecture enables key-value stores that can operate at **60Gbps with 2TB value-store capacity** that fits within a 2U OpenPOWER Server. The architecture can be easily extended. We are actively investigating using Flash to expand value storage even further for large granular access. But most of all, we are really excited about the opportunities for this architecture when combining this basic functionality with new capabilities such as encryption, compression, data analytics, and face & object recognition!

**Getting Started**

- Visit [Xilinx at SC15](http://www.xilinx.com/about/events/sc15.html)! November 15-19, Austin, TX.
- Learn more about [POWER8 CAPI](http://www-304.ibm.com/webapp/set2/sas/f/capi/home.html)
- Purchase a CAPI developer kit from [Nallatech](http://www.nallatech.com/solutions/openpower-capi-developer-kit-for-power-8/) or [AlphaData](http://www.alpha-data.com/dcp/capi.php)
- License this technology through [Xilinx](http://www.xilinx.com/) today. We work directly with customers and data centers to scale performance/watt in existing deployments with hardware-based KVS accelerators. If you are interested in this technology, please contact us.

* * *

**References**

_\[1\] M. Blott, K. Vissers, K. Karras, L. Liu, Z. Istvan, G. Alonso: HotCloud 2013; Achieving 10Gbps Line-rate Key-value Stores with FPGAs_

_\[2\] M. Blott, K. Vissers: HotChips14; Dataflow Architectures for 10Gbps Line-rate Key-value Stores_

_\[3\] M. Blott, K. Vissers, L. Liu: HotStorage 2015; Scaling out to a Single-Node 80Gbps Memcached Server with 40 Terabytes of Memory_

_\[4\] PCIe bandwidth reference numbers were kindly provided by Noa Zilberman & Andrew Moore from Cambridge University_

* * *

**_About Michaela Blott_**

![Michaela Blott](images/Michaela-Blott.png)

Michaela Blott graduated from the University of Kaiserslautern in Germany. She has worked in research institutions (ETH and Bell Labs) as well as development organizations, and was deeply involved in large-scale international collaborations such as NetFPGA-10G. Today, she works as a principal engineer at the Xilinx labs in Dublin, heading a team of international researchers investigating reconfigurable computing for data centers and other new application domains. Her expertise includes data centers, high-speed networking, emerging memory technologies and distributed computing systems, with an emphasis on building complete implementations.

@ -1,31 +0,0 @@
---
title: "Accelerator Opportunities with OpenPower"
date: "2015-01-16"
categories:
- "blogs"
---

### Abstract

The OpenPOWER architecture provides unique capabilities that will enable highly effective and differentiated acceleration solutions. The OpenPOWER Accelerator Workgroup is chartered to develop both the hardware and software standards that give vendors the ability to develop these solutions. The presentation will cover an overview of the benefits of the OpenPOWER architecture for acceleration solutions, an overview of the Accelerator Workgroup's plans and standards roadmap, and an overview of the OpenPOWER CAPI development kit. We will also walk through an example of a CAPI-attached acceleration solution.

### Presentation agenda

- Overview of opportunity for OpenPower acceleration solutions
- OpenPower Accelerator workgroup charter and standards roadmap
- OpenPower CAPI Development Kit
- CAPI attached acceleration solution example

### Bio

[Nick Finamore](https://www.linkedin.com/profile/view?id=4723882&authType=NAME_SEARCH&authToken=2y98&locale=en_US&srchid=32272301421437850712&srchindex=3&srchtotal=8&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421437850712%2CVSRPtargetId%3A4723882%2CVSRPcmpt%3Aprimary), Altera Corporation Product Marketing Manager for Software Development Tools, and Chairperson of the OpenPOWER Foundation Accelerator Workgroup

For the past three years, Nick has been leading Altera's computing acceleration initiative and the marketing of Altera's SDK for OpenCL. Previously, Nick held several leadership positions at early-stage computing and networking technology companies, including Netronome, Ember (SiLabs) and Calxeda. Nick also had an 18-year career at Intel, where he held several management positions, including general manager of the network processor division.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Finamore-Nick_OPFS2015_Altera_031215_final.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Finamore-Nick_OPFS2015_Altera_031215_final.pdf)

[Back to Summit Details](javascript:history.back())

@ -1,30 +0,0 @@
---
title: "Singapore's A*CRC Joins the OpenPOWER Foundation to Accelerate HPC Research"
date: "2016-03-17"
categories:
- "blogs"
tags:
- "featured"
---

_By Ganesan Narayanasamy, Senior Manager, IBM Systems_

Singapore's Agency for Science, Technology and Research (A\*STAR) is the largest government-funded research organization in Singapore, with over 5,300 personnel in 14 research institutes across the country.

[![A STAR Computational Resource Centre](images/A-STAR-Computational-Resource-Centre.png)](https://openpowerfoundation.org/wp-content/uploads/2016/03/A-STAR-Computational-Resource-Centre.png)

A\*STAR Computational Resource Centre (A\*CRC) provides high-performance computing (HPC) resources to the entire A\*STAR research community. Currently, A\*CRC supports the HPC needs of an 800-member user community and manages several high-end computers, including an IBM 822LC system with NVIDIA K80 GPU cards and a Mellanox EDR switch used to port and optimize HPC applications. It is also responsible for very rapidly growing data storage resources.

A\*CRC will work with IBM and the OpenPOWER Foundation to hasten its path to developing applications on OpenPOWER systems, leveraging the Foundation's ecosystem of technology.

https://youtu.be/F07fJHhQdu4

Experts at A\*CRC will explore the range of scientific applications that leverage the Power architecture as well as NVIDIA's GPUs and Mellanox's 100 Gb/sec InfiniBand switches. The switches are designed to work with IBM's Coherent Accelerator Processor Interface (CAPI), an OpenPOWER technology that allows attached accelerators to connect with the Power chip at a deep level.

A\*CRC also will work with the OpenPOWER Foundation on evolving programming models such as OpenMP, the open multiprocessing API designed to support multi-platform shared memory.

“We need to anticipate the rise of new high performance computing architectures that bring us closer to exascale and prepare our communities,” A\*CRC CEO Marek Michalewicz noted in a statement.

[![SCF2016-logo_final_retina2](images/SCF2016-logo_final_retina2-300x129.png)](https://openpowerfoundation.org/wp-content/uploads/2016/03/SCF2016-logo_final_retina2.png)

This week, A\*STAR is hosting the [Singapore Supercomputing Frontiers Conference](http://supercomputingfrontiers.com/2016/). To learn more about their work, take part in our OpenPOWER workshop on March 18 and stay tuned for additional updates.

@ -1,24 +0,0 @@
---
title: "Advancing the Human Brain Project with OpenPOWER"
date: "2016-10-27"
categories:
- "blogs"
tags:
- "featured"
---

_By Dr. Dirk Pleiter, Research Group Leader, Jülich Supercomputing Centre_

![Human Brain Project and OpenPOWER members NVIDIA, IBM](images/HBP_Primary_RGB-1-1024x698.png)

The [Human Brain Project](https://www.humanbrainproject.eu/) (HBP), a flagship project [funded by the European Commission](http://ec.europa.eu/research/fp7/index_en.cfm), has set itself an ambitious goal: Unifying our understanding of the human brain. To achieve it, researchers need a High-Performance Analytics and Compute Platform comprised of supercomputers with features that are currently not available, but OpenPOWER is working to make them a reality.

Through a Pre-Commercial Procurement (PCP) the HBP initiated the necessary R&D, and turned to the OpenPOWER Foundation for help. During three consecutive phases, a consortium of [IBM and NVIDIA has successfully been awarded with R&D contracts](http://www.fz-juelich.de/SharedDocs/Pressemitteilungen/UK/EN/2016/16-09-27hbp_pilotsysteme.html). As part of this effort, a pilot system called [JURON](https://hbp-hpc-platform.fz-juelich.de/?page_id=1073) (a combination of Jülich and neuron) has been installed at Jülich Supercomputing Centre (JSC). It is based on the [new IBM S822LC for HPC servers](https://www.ibm.com/blogs/systems/ibm-nvidia-present-nvlink-server-youve-waiting/), each equipped with two POWER8 processors and four NVIDIA P100 GPUs.

Marcel Huysegoms, a scientist from [the Institute for Neuroscience and Medicine](http://www.fz-juelich.de/inm/EN/Home/home_node.html), with support from JSC, was able to demonstrate the usability of the system for his brain image registration application soon after deployment. Exploiting the processing capabilities of the GPUs without further tuning, he achieved a significant speed-up compared to the currently used production system based on Haswell x86 processors and K80 GPUs.

The improved compute capabilities are not the only thing that matters for brain research: by designing and implementing the Global Sharing Layer (GSL), the non-volatile memory cards mounted on all nodes become a byte-addressable, globally accessible memory resource. Using JURON, it could be shown that data can be read at a rate limited only by network performance. These new technologies will open new opportunities for enabling data-intensive workflows in brain research, including data visualization.

The pilot system will be the first system based on POWER processors where graphics support is being brought to the HPC node. In combination with the GSL it will be possible to visualize large data volumes that are, as an example, generated by brain model simulations. Flexible allocation of resources to compute applications, data analytics and visualization pipelines will be facilitated through another new component, namely the dynamic resource management. It allows for suspension of execution of parallel jobs for a later restart with a different number of processes.

JURON clearly demonstrates the potential of a technology ecosystem settled around a processor architecture with interfaces that facilitate efficient integration of various devices for efficient processing, moving and storing of data. In other words, it demonstrates the collaborative potential of OpenPOWER.

@ -1,22 +0,0 @@
---
title: "Advancing the OpenPOWER vision"
date: "2015-01-16"
categories:
- "blogs"
---

### Abstract

It's been nearly a year since the public launch of OpenPOWER, and the community of technology leaders that make up our foundation has made significant progress towards our original goals. While growth of the membership is a critical factor, our success will come from the technology provided through the open model and the value solutions that are enabled by leveraging that technology. Please join us as we highlight the key components that our member community has contributed to that open model and spotlight some examples of high-value solutions enabled by members leveraging our combined capabilities and strengths.

### Speaker

[Gordon MacKean](https://www.linkedin.com/profile/view?id=1547172&authType=NAME_SEARCH&authToken=PNgl&locale=en_US&trk=tyah2&trkInfo=tarId%3A1421437126543%2Ctas%3AGordon%20McKean%2Cidx%3A1-1-1) is a Sr. Director with the Hardware Platforms team at Google. He leads the team responsible for the design and development of the server and storage products used to power Google data centers. Prior to Google, Gordon held management and design roles at several networking companies, including Matisse Networks, Extreme Networks, and Nortel Networks. Gordon is a founder of the OpenPOWER Foundation and serves as the Chairman of the Board of Directors. Gordon holds a Bachelor's degree in Electrical Engineering from Carleton University.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/MacKean-McCredie_OPFS2015_KEYNOTE15-03-16-gm5.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/MacKean-McCredie_OPFS2015_KEYNOTE15-03-16-gm5.pdf)

[Back to Summit Details](javascript:history.back())

@ -1,24 +0,0 @@
---
title: "AI to Improve Rural Healthcare Discussed at OpenPOWER Summit Europe"
date: "2018-10-18"
categories:
- "blogs"
tags:
- "featured"
---

By Dr. Praveen Kumar B.A. M.B.B.S, M.D., professor, Department of Community Medicine, PES Institute of Medical Sciences and Research

It was great to attend the [OpenPOWER Summit Europe](https://openpowerfoundation.org/summit-2018-10-eu/) in Amsterdam earlier this month. As an academia member from a medical background, it was the first completely technical forum I had attended at an international level.

The [PES Institute of Medical Sciences](http://pesimsr.pes.edu/), India, has recently been working with IBM and the OpenPOWER community on developing AI solutions for patient care in our rural facility. We are a tertiary care teaching institute catering to a rural population of around one million. I attended the OpenPOWER Summit Europe to discuss the need and opportunity for deploying AI solutions in our work.

<iframe style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 5px; max-width: 100%;" src="//www.slideshare.net/slideshow/embed_code/key/7A5W1wzbDtaDzh" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe>

**[Artificial Intelligence in Healthcare at OpenPOWER Summit Europe](//www.slideshare.net/OpenPOWERorg/artificial-intelligence-in-healthcare-at-openpower-summit-europe "Artificial Intelligence in Healthcare at OpenPOWER Summit Europe")** from **[OpenPOWERorg](https://www.slideshare.net/OpenPOWERorg)**

AI in health care was a featured theme throughout the OpenPOWER Summit Europe. Professor Florin Manaila demonstrated solutions he has worked on for breast cancer diagnosis and grading using image processing. And Professor Antonio Liotta spoke about machine learning and AI-related research in his lab.

The AI4Good Hackathon invited researchers from across the world to find solutions for health challenges particularly in cancer care. I was glad to see students from India and Europe participating.

I look forward to networking with other academic and industry teams to work on further developing model training and implementation. Through collaboration, institutions can partner together to secure funding and innovate toward a brighter future.

@ -1,39 +0,0 @@
---
title: "Algo-Logic Systems launches CAPI enabled Order Book running on IBM® POWER8™ server"
date: "2015-03-18"
categories:
- "press-releases"
- "blogs"
---

SANTA CLARA, Calif., March 16, 2015 /PRNewswire/ -- Algo-Logic Systems, a recognized leader in providing hardware-accelerated, deterministic, ultra-low-latency products, systems and solutions for the accelerated finance, packet processing and embedded system industries, today announced availability of its new Coherent Accelerator Processor Interface (CAPI) enabled Full Order Book solution on IBM® POWER8™ systems. The CAPI-enabled Order Book performs all feed processing and book building in logic inside a single Stratix V FPGA on the Nallatech P385 card. The system enables software to directly receive order book snapshots in coherent shared memory with the least possible latency. The low-latency Order Book is designed using on-chip memory for customer book sizes with many thousands of open orders, up to 24 symbols, and reporting of six L-2 book levels. For use cases where millions of open orders and full market depth need to be tracked, the scalable CAPI-enabled Order Book is still implemented with a single FPGA but stores data in off-chip memory.

Photo - [http://photos.prnewswire.com/prnh/20150314/181760](http://photos.prnewswire.com/prnh/20150314/181760)

The CAPI Order Book building process includes (i) receiving parsed market data feed messages, (ii) building and maintaining an L-3 order-level replica of the exchange's displayable book, (iii) building L-2 books for each symbol with the market depth and weight summary of all open orders, and (iv) reporting a locally generated copy of the top-of-book with a configurable amount of market depth (L-2 snapshots) as well as the last trade information when orders execute. By using the IBM POWER8 server, algorithms can run on the highest number of cores and seamlessly integrate with the Order Book hardware accelerator by means of the coherent shared memory. Through a simple memory-mapped IO (MMIO) address space, all the parameters are configurable and statistics can be easily read from software. Algo-Logic's CAPI-enabled Full Order Book achieves deterministic, ultra-low latency without jitter, regardless of the number of tracked symbols, at data rates of up to 10 Gbps. Key features include:

- Accelerated Function Unit (AFU) is implemented on FPGA under CAPI
- Full Order Book with a L-2 default size of 6 price-levels per symbol, fully scalable to larger sizes
- By default L-2 snapshots are generated for each symbol
- The number of symbols in use and their respective snapshots are user configurable
- L-2 snapshot generation frequency is also user configurable on an event basis or at a customizable interval
- Full Order Book output logic seamlessly connects to customer's proprietary algorithmic trading strategies
- Trader has access to the latest market depth (L-2 snapshots) in coherent shared memory
- L-3 Book updates complete with processing latency of less than 230 nanoseconds
- L-2 Book updates complete with processing latency of less than 120 nanoseconds
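The MMIO interface described above, where configuration parameters are written and statistics read through a simple address space, can be sketched roughly as follows. All register names, fields, and values below are hypothetical assumptions for illustration; the actual register layout is not given in this release. In a live system the struct would be overlaid on the MMIO window exposed through CAPI rather than on ordinary memory.

```cpp
#include <cstdint>

// Hypothetical register block for the Order Book accelerator, modeled
// as a memory-mapped struct. volatile keeps the compiler from caching
// or reordering accesses to device registers.
struct OrderBookRegs {
    volatile uint32_t num_symbols;     // config: symbols to track
    volatile uint32_t snapshot_mode;   // config: 0 = per event, 1 = interval
    volatile uint32_t snapshot_usec;   // config: snapshot interval (us)
    volatile uint32_t msgs_processed;  // stat: feed messages consumed
    volatile uint32_t book_updates;    // stat: book updates applied
};

// Configure the accelerator and poll a statistic with plain loads and
// stores into the mapped region.
uint32_t configure_and_poll(OrderBookRegs *regs) {
    regs->num_symbols   = 24;   // on-chip build tracks up to 24 symbols
    regs->snapshot_mode = 1;    // snapshot on a fixed interval
    regs->snapshot_usec = 100;
    return regs->msgs_processed;
}
```

The point of the sketch is that, with CAPI's coherent shared memory, software reaches the accelerator through ordinary loads and stores rather than driver ioctls or DMA descriptor rings.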

The CAPI Order Book can be seamlessly integrated with other components of Algo-Logic's Low Latency Application Library, including pre-built protocol parsing libraries, market data filters, and TCP/IP endpoints to deploy complete tick-to-trade applications within a single Stratix V FPGA platform.

Algo-Logic's world-class hardware-accelerated systems and solutions are used by banks, trading firms, market makers, hedge funds, and financial institutions to accelerate their network processing for protocol parsing, symbol filtering, risk checks (SEC Rule 15c3-5), order book processing, order injection, proprietary trading strategies, high-frequency trading, financial surveillance systems, and algorithmic trading.

Availability: The CAPI Order Book solution is currently shipping. For additional information, please contact [Info@algo-logic.com](mailto:Info@algo-logic.com) or visit our website at [www.algo-logic.com](http://www.algo-logic.com/).

About Algo-Logic Systems: Algo-Logic Systems, Inc. is the recognized leader and developer of Gateware Defined Networking® (GDN) for Field Programmable Gate Array (FPGA) devices. Algo-Logic IP-Cores are used for accelerated finance, packet processing and classification in datacenters, and real-time data acquisition and processing in embedded hardware systems. The company has extensive experience in building complete network processing system solutions in FPGA logic.

To view the original version on PR Newswire, visit:[http://www.prnewswire.com/news-releases/algo-logic-systems-launches-capi-enabled-order-book-running-on-ibm-power8-server-300050631.html](http://www.prnewswire.com/news-releases/algo-logic-systems-launches-capi-enabled-order-book-running-on-ibm-power8-server-300050631.html)

SOURCE Algo-Logic Systems

@ -1,9 +0,0 @@
---
title: "Altera Brings FPGA-based Acceleration to IBM Power Systems and Announces Support for OpenPOWER Consortium"
date: "2014-11-18"
categories:
- "press-releases"
- "blogs"
---

San Jose, Calif., November 18, 2013—Altera Corporation (NASDAQ: ALTR) today announced the latest release of the Altera SDK for OpenCL supports IBM Power Systems servers as an OpenCL system host. Customers are now able to develop OpenCL code that targets IBM Power Systems CPUs and accelerator boards with Altera FPGAs as a high-performance compute solution. FPGA accelerated systems can achieve a 5-20X performance boost over standard CPU based servers. Altera will showcase the performance advantage of using FPGAs to accelerate IBM Power Systems, as well as other OpenCL-focused demonstrations, this week at SuperComputing 2013 in booth #4332.

@ -1,9 +0,0 @@
---
title: "Altera Joins IBM OpenPOWER Foundation to Enable the Development of Next-Generation Data Centers"
date: "2014-03-24"
categories:
- "press-releases"
- "blogs"
---

San Jose, Calif., March 24, 2014. Altera Corporation (Nasdaq: ALTR) today announced it has joined the IBM OpenPOWER Foundation, an open development alliance based on IBM's POWER microprocessor architecture. Altera will collaborate with IBM and other OpenPOWER Foundation members to develop high-performance compute solutions that integrate IBM POWER CPUs with Altera's FPGA-based acceleration technologies for use in next-generation data centers.

@ -1,35 +0,0 @@
---
title: "American Megatrends Custom Built Server Management Platform for OpenPOWER"
date: "2015-11-13"
categories:
- "blogs"
tags:
- "power8"
- "ami"
---

**_By Christine M. Follett, Marketing Communications Manager, American Megatrends, Inc._**

As one of the newest members of the OpenPOWER Foundation, we at American Megatrends, Inc. (AMI) are very excited to get started and contribute to the mission and goals of the Foundation. Our President and CEO, Subramonian Shankar, who founded the company thirty years ago, shares his thoughts on joining the Foundation:

“Participating in OpenPOWER with partners such as IBM and TYAN will allow AMI to more rapidly engage as our market continues to grow, and will ensure our customers receive the industry's most reliable and feature-rich platform management technologies. As a market leader for core server firmware and management technologies, we are eager to assist industry leaders in enabling next-generation data centers as they rethink their approach to systems design.”

![MegaRAC_SPX_logo_1500x1200](images/MegaRAC_SPX_logo_1500x1200-300x240.png) The primary technology that AMI is currently focusing on for its participation in the OpenPOWER Foundation is a full-featured server management solution called MegaRAC® SPX, in particular a custom version of this product developed for POWER8-based platforms. MegaRAC SPX for POWER8 is a powerful development framework for server management solutions, composed of firmware and software components based on industry standards like IPMI 2.0, SMASH, and Serial over LAN (SOL). It offers key serviceability features including remote presence, CIM profiles and advanced automation.

MegaRAC SPX for POWER8 also features a high level of modularity, with the ability to easily configure and build firmware images by selecting features through an intuitive graphical development tool chain. These features are available in independently maintained packages for superior manageability of the firmware stack. You can learn more about MegaRAC SPX at our website dedicated to AMI remote management technology [here](http://www.megarac.com/live/embedded/megarac-spx/).

![AMI dashboard](images/AMI-dashboard.png)

Foundation founding member TYAN has been an early adopter of MegaRAC SPX for POWER8, adopting it for one of their recent platforms. According to Albert Mu, Vice President of MITAC Computing Technology Corporation's TYAN Business Unit, “AMI has been a critical partner in the development of our POWER8-based platform, the TN71-BP012, which is based on the POWER8 architecture and provides tremendous memory capacity as well as outstanding performance that fits datacenter, Big Data or HPC environments. We are excited that AMI has strengthened its commitment to the POWER8 ecosystem by joining the OpenPOWER Foundation.”

Founded in 1985, AMI is known worldwide for its AMIBIOS® firmware. From our start as the industry's original independent BIOS vendor, we have evolved to become a key supplier of state-of-the-art hardware, software and utilities to top-tier manufacturers of desktop, server, mobile and embedded computing systems.

With AMI's extensive product lines, we are uniquely positioned to provide all of the fundamental components that help OpenPOWER innovate across the system stack, providing performance, manageability, and availability for today's modern data centers. AMI prides itself on its unique position as the only company in the industry that offers products and services based on all of these core technologies.

AMI is extremely proud to join our fellow OpenPOWER member organizations working collaboratively to build advanced server, networking, storage and acceleration technology as well as industry-leading open source software. Together we can deliver more choice, control and flexibility to developers of next-generation hyperscale and cloud data centers.

* * *

**_About Christine M. Follett_**

_![Christine Follett](images/Christine-Follett.png)Christine M. Follett is Marketing Communications Manager for American Megatrends, Inc. (AMI). Together with the global sales and marketing team of AMI, which spans seven countries, she works to expand brand awareness and market share for the company's diverse line of OEM, B2B/Channel and B2C technology products, including AMI's industry-leading Aptio® V UEFI BIOS firmware, innovative StorTrends® Network Storage hardware and software products, MegaRAC® remote server management tools and unique solutions based on the popular Android™ and Linux® operating systems._

@ -1,9 +0,0 @@
---
title: "AMI Joins OpenPOWER"
date: "2015-06-03"
categories:
- "press-releases"
- "blogs"
---


@ -1,11 +0,0 @@
---
title: "As Computing Tasks Evolve, Infrastructure Must Adapt"
date: "2014-06-11"
categories:
- "industry-coverage"
- "blogs"
---

The litany of computing buzzwords has been repeated so often that we've almost glazed over: mobile, social, cloud, crowd, big data, analytics. After a while, they almost lose their meaning.

Taken together, though, they describe the evolution of computing from its most recent incarnation — single user, sitting at a desk, typing on a keyboard, watching a screen, local machine doing all the work — to a much more amorphous activity that involves a whole new set of systems, relationships, and actions.

@ -1,67 +0,0 @@
---
title: "We're Attending the OpenPOWER Developer Congress — Here's Why You Should, Too. Insights from Nimbix, Mellanox, and Xilinx"
date: "2017-05-12"
categories:
- "blogs"
tags:
- "mellanox"
- "xilinx"
- "openpower-foundation"
- "openpower-foundation-developer-congress"
- "opfdevcon17"
- "nimbix"
---

Prominent OpenPOWER Foundation members have shared the reasons they're taking time out of their busy days to support the [OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/) and send their experts and team members.

This is why YOU should attend too!

## **Nimbix Enables On-Demand Cloud for Developers**

### **Why [Nimbix](https://www.nimbix.net/) is Participating in the OpenPOWER Developer Congress**

As the leading public cloud provider for OpenPOWER and Power systems, Nimbix has embraced its role as a member in the OpenPOWER Foundation. Nimbix enables ISVs to get their applications ported and running on the Power architecture, and feels a responsibility to help the OpenPOWER community. This is what the company signed up for when it became a Silver-level member of the OpenPOWER Foundation.

Nimbix works to grow the Power ecosystem for application software and broaden the software portfolio on OpenPOWER. It facilitates this by:

- Providing ISVs and developers a Continuous Integration / Continuous Deployment (CI/CD) pipeline to deploy their source code on Power.
- Providing the ability to not just port, but to test at scale, on a supercomputer in the cloud that runs on OpenPOWER technology.
- Enabling ISVs that decide to go to market with their applications in the cloud to sell those applications directly in the Nimbix cloud.

### **What is Nimbix Bringing to the Developer Congress?**

“Nimbix is proud to support the OpenPOWER Developer Congress by providing resources to support Congress activities,” said Leo Reiter, CTO of Nimbix. “Through our support, we will be enabling the on-demand cloud infrastructure for the Congress so that all of the sessions and tracks can do their development in the cloud on the OpenPOWER platform.”

Leo will be part of the team instructing the cloud development and porting-to-Power tracks at the Congress. “As an OpenPOWER Foundation member,” Leo said, “I will be working with participants to get their applications running on Power in the cloud and providing them with tips and tools they can use to continue developing OpenPOWER applications post-conference.”

[![OpenPOWER Developer Congress](images/OPDC-Web-Banner.jpg)](https://openpowerfoundation.org/wp-content/uploads/2017/05/OPDC-Web-Banner.jpg)

_[Click here to register for the OpenPOWER Developer Congress](https://openpowerfoundation.org/openpower-developer-congress/) - May 22-25 in San Francisco._

## **Mellanox Educates on Caffe, Chainer, and TensorFlow**

### **Why [Mellanox](http://www.mellanox.com/) is Participating in the OpenPOWER Developer Congress**

Mellanox is not only a founding member of the OpenPOWER Foundation, but also a founding member of its Machine Learning Work Group. AI and cognitive computing will improve our quality of life, drive emerging markets, and surely play a leading role in global economics. But to achieve real scalable performance with AI, being able to leverage cutting-edge interconnect capabilities is paramount. Typical vanilla networking just doesn't scale, so it's important that developers are aware of the additional performance that can be achieved by understanding the critical role of the network.

Because Deep Learning applications are well-suited to exploit the POWER architecture, it is also extremely important to have an advanced network that unlocks the scalable performance of deep learning systems, and that is where the Mellanox interconnect comes in. The benefits of RDMA, ultra-low latency, and In-Network Computing deliver an optimal environment for data-ingest at the critical performance levels required by POWER-based systems.

Mellanox is committed to working with the industry's thought leaders to drive technologies in the most open way. Its core audience has always been end users — understanding their challenges and working with them to deliver real solutions. Today, more than ever, developers, data-centric architects, and data scientists are the new generation of end users that drive the data center. They are defining the requirements of the data center, establishing its performance metrics, and delivering the fastest time to solution by exploiting the capabilities of the OpenPOWER architecture. Mellanox believes that participating in the OpenPOWER Developer Congress gives the company an opportunity to educate developers on its state-of-the-art networking and also demonstrates its commitment to innovation with open development and open standards.

### **What is Mellanox Bringing to the Developer Congress?**

Mellanox will provide on-site expertise to discuss the capabilities of Mellanox Interconnect Solutions. Dror Goldenberg, VP of Software Architecture at Mellanox, will be present to further dive into areas of machine learning acceleration and the frameworks that already take advantage of Mellanox capabilities, such as Caffe, Chainer, TensorFlow, and others.

Mellanox is the interconnect leader in AI and cognitive computing data centers, and already accelerates machine learning frameworks to achieve from 2x to 18x speedup for image recognition, NLP, voice recognition, and more. The company's goal is to assist developers with their applications to achieve maximum scalability on POWER-based systems.

## **Xilinx Offers Experts in FPGAs and Machine Learning Algorithms**

### **Why is [Xilinx](https://www.xilinx.com/) Participating in the OpenPOWER Developer Congress?**

Xilinx, as a Platinum-level member of the OpenPOWER Foundation, looks forward to supporting the Foundation's outreach activities. The company particularly likes the format of the upcoming OpenPOWER Developer Congress because it's focused on developers and provides many benefits developers will find helpful.

Xilinx appreciates the unique nature of the Congress, in that it provides developers the opportunity to get up close to the technology and, in some cases, work on it directly. It also allows developers to make good connections with other companies participating in the Congress — something that can be very beneficial as they return to their day-to-day work.

Companies that choose to participate by providing instruction at the Congress get an opportunity to talk with developers firsthand and receive feedback on their product offerings. Conversely, the developers have an opportunity to provide feedback on products and influence what platforms (everything OpenPOWER) will look like as they mature.

### **What is Xilinx bringing to the Developer Congress?**

Xilinx will be bringing system architects and solution architects who will work hands-on with developers to create solutions and solve problems. These experts understand both FPGAs and machine learning algorithms, which fits nicely with the OpenPOWER Developer Congress agenda.

@ -1,33 +0,0 @@
---
title: "Avnet Joins OpenPOWER Foundation"
date: "2015-01-15"
categories:
- "press-releases"
- "blogs"
---

PHOENIX, Jan 15, 2015 (BUSINESS WIRE) -- [Avnet, Inc](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=Avnet%2C+Inc&index=1&md5=40a05c1ec12025dc0539a7a8b4ef0803). (NYSE: [AVT](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fir.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=AVT&index=2&md5=65187ddc0108742fc13369e6a37bf5d8)), a leading global technology distributor, today announced that it has joined the OpenPOWER Foundation, an open development alliance based on IBM's POWER microprocessor architecture. Working with the OpenPOWER Foundation, Avnet will help partners and customers innovate across the full hardware and software stack to build customized server, networking and storage hardware solutions best suited to the high-performance Power architecture.

The OpenPOWER Foundation was established in 2013 as an open technical membership organization that provides a framework for open innovation at both the hardware and software levels. IBM's POWER8 processor serves as the hardware foundation, while the system software structure embraces key open source technologies including KVM, Linux and OpenStack.

“Working with the OpenPOWER Foundation complements Avnet's long-standing relationship with IBM across the enterprise, from the components level to the data center,” said Tony Madden, Avnet senior vice president, global supplier business executive. “With the accelerated pace of change in technology, membership in the OpenPOWER Foundation provides an excellent avenue for us to work alongside other market leaders to deploy open Power technology, providing customers and partners with the technology infrastructure they need to evolve and grow their businesses.”

As an OpenPOWER Foundation member, Avnet will provide channel distribution and integration services for OpenPOWER compatible offerings, enabling its partners and customers to focus on innovation, optimizing operational efficiency and enhancing profitability.

[Click to Tweet](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fctt.ec%2FPia36&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=Click+to+Tweet&index=3&md5=e1f5619ff235ad8e0320d1e3b644bef6): .@Avnet joins #OpenPOWER Foundation [http://bit.ly/1ll33LR](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fbit.ly%2F1ll33LR&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=http%3A%2F%2Fbit.ly%2F1ll33LR&index=4&md5=13bd59e51076a54fb02f3151f471cea4)

Follow Avnet on Twitter: [@Avnet](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Ftwitter.com%2Favnet&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=%40Avnet&index=5&md5=e43111ddc5cf4e9c106917a235854dfe)

Connect with Avnet on LinkedIn or Facebook: [https://www.linkedin.com/company/avnet](http://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fwww.linkedin.com%2Fcompany%2Favnet&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=https%3A%2F%2Fwww.linkedin.com%2Fcompany%2Favnet&index=6&md5=bbd3f4d589ef461e25d34e4d47b471e3) or [facebook.com/avnetinc](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.facebook.com%2FAvnetInc&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=facebook.com%2Favnetinc&index=7&md5=5308390807e1e243f6406e2d0b1cc2fa)

Read more about Avnet on its blogs: [http://blogging.avnet.com/weblog/mandablog/](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fblogging.avnet.com%2Fweblog%2Fmandablog%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=http%3A%2F%2Fblogging.avnet.com%2Fweblog%2Fmandablog%2F&index=8&md5=2c0e6a3e0270a00a96a936beb022ceae)

**About Avnet, Inc.**

Avnet, Inc. (NYSE: [AVT](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fir.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=AVT&index=9&md5=cacab694ead8f7e5a00e8889cb04f2fa)), a Fortune 500 company, is one of the largest distributors of electronic components, computer products and embedded technology serving customers globally. Avnet accelerates its partners' success by connecting the world's leading technology suppliers with a broad base of customers by providing cost-effective, value-added services and solutions. For the fiscal year ended June 28, 2014, Avnet generated revenue of $27.5 billion. For more information, visit [www.avnet.com](http://cts.businesswire.com/ct/CT?id=smartlink&url=http%3A%2F%2Fwww.avnet.com%2F&esheet=51019857&newsitemid=20150115005158&lan=en-US&anchor=www.avnet.com&index=10&md5=b5f2c37f3d7d641a5aaf2ef50d090012).

All brands and trade names are trademarks or registered trademarks, and are the properties of their respective owners. Avnet disclaims any proprietary interest in marks other than its own.

SOURCE: Avnet, Inc.

Avnet, Inc. Joal Redmond, +1 480-643-5528 [joal.redmond@avnet.com](mailto:joal.redmond@avnet.com) or Brodeur Partners, for Avnet, Inc. Marcia Chapman, +1 480-308-0284 [mchapman@brodeur.com](mailto:mchapman@brodeur.com)

@ -1,22 +0,0 @@
---
title: "Barcelona Supercomputing Center Adds HPC Expertise to OpenPOWER"
date: "2016-10-27"
categories:
- "blogs"
tags:
- "featured"
---

_Eduard Ayguadé, Computer Sciences Associate Director at BSC_

![Barcelona Supercomputing Center joins OpenPOWER](images/BSC-blue-large-1024x255.jpg)

The [Barcelona Supercomputing Center](https://www.bsc.es/) (BSC) is Spain's national supercomputing facility. Our mission is to investigate, develop and manage information technologies to facilitate scientific progress. It was officially constituted in April 2005 with four scientific departments: Computer Sciences, Computer Applications in Science and Engineering, Earth Sciences and Life Sciences. In addition, the Center's Operations department manages MareNostrum, one of the most powerful supercomputers in Europe. The activities in these departments are complementary and very tightly related, forming a multidisciplinary loop: computer architecture, programming models, runtime systems and resource managers, performance analysis tools, and algorithms and applications in the above-mentioned scientific and engineering areas.

Joining the OpenPOWER Foundation will allow BSC to advance its mission, improving the way we contribute to the scientific and technological HPC community and, ultimately, serve society. BSC plans to actively participate in the different OpenPOWER working groups with the objective of sharing our research results, prototype implementations and know-how with the other members to influence the design of future systems based on the POWER architecture. As a member of OpenPOWER, BSC hopes to gain visibility and opportunities to collaborate with other leading institutions in high-performance architectures, programming models and applications.

In the framework of the current [IBM-BSC Deep Learning Center](https://www.bsc.es/news/bsc-news/bsc-and-ibm-research-deep-learning-center-boost-cognitive-computing) initiative, BSC and IBM will collaborate in research and development projects on the Deep Learning domain, an essential component of cognitive computing, with focus on the development of new algorithms to improve and expand the cognitive capabilities of deep learning systems. Additionally, the center will also do research on flexible computing architectures fundamental for big data workloads like data centric systems and applications.

Researchers at BSC have been working on policies to optimally manage, from the runtime system, the hardware resources available in POWER-based systems, including prefetching, multithreading degree and energy-securing. These policies are driven by the information provided by the per-task (performance and power) counters and control knobs available in POWER architectures. Researchers at BSC have also been collaborating with the compiler teams at IBM on the implementation and evolution of the [OpenMP programming model](https://www.ibm.com/developerworks/community/groups/service/html/communitystart?communityUuid=8e0d7b52-b996-424b-bb33-345205594e0d) to support accelerators; evaluating new SKV (Scalable Key-Value) storage capabilities on top of novel memory and storage technologies, including bug reporting and fixing; using Smufin, one of the key applications at BSC to support personalized medicine; and exploring NUMA-aware placement strategies in POWER architectures to deploy containers based on the workloads' characteristics and system state.

Today, during the [OpenPOWER Summit Europe](https://openpowerfoundation.org/openpower-summit-europe/) in Barcelona, the director of BSC, Prof. Mateo Valero, will present the mission and main activities of the Center and the different departments at the national, European and international level. After that, he will present the work that BSC is conducting with different OpenPOWER members, including IBM, NVIDIA, Samsung, and Xilinx, with a special focus on the BSC and IBM research collaboration in the last 15 years.

@ -1,50 +0,0 @@
---
title: "Barreleye G2 and Zaius Motherboard Samples Showcased at the OpenPOWER Summit"
date: "2018-05-14"
categories:
- "blogs"
tags:
- "google"
- "rackspace"
- "openpower-summit"
- "barreleye"
- "zaius"
- "openpower-foundation"
---

By Adi Gangidi

[![Barreleye G2 Accelerator server](images/barreleye-267x300.jpg)](https://openpowerfoundation.org/wp-content/uploads/2018/05/barreleye.jpg) _Barreleye G2 Accelerator server_

Rackspace showcased brand new Zaius PVT motherboard samples and Barreleye G2 servers at the [OpenPOWER Summit](https://opfus2018.sched.com/event/E36g/accelerators-development-update-zaius-barreleye-g2), demonstrating industry leading capabilities.

## **Collaboration between Google and Rackspace**

The Zaius/Barreleye G2 OpenPOWER platform was originally [announced](https://blog.rackspace.com/first-look-zaius-server-platform-google-rackspace-collaboration) at the OpenPOWER Summit in 2016 as a collaborative effort between Google and Rackspace. Since then, we have made steady progress on the development of this platform. We've navigated through engineering validation and test (EVT) and design validation and test (DVT), and made various optimizations to the design, resulting in a refined solution.

We continue to [qualify](https://blog.rackspace.com/zaius-barreleye-g2-server-development-update-2) various OpenCAPI/NVLink 2.0 adapters and experiment with frameworks ([SNAP](https://github.com/open-power/snap)/[PowerAI](https://www.ibm.com/us-en/marketplace/deep-learning-platform)) that enable easy adoption of these adapters.

## **Zaius motherboard**

Our Zaius motherboard has just entered the production validation and test stage, which reflects our confidence in this design and our continued effort to bring OpenCAPI/NVLink 2.0/PCIe Gen4 accelerators to datacenters via this server housing IBM Power9 processors.

[![PVT Zaius Motherboard](images/PVT-1024x651.png)](https://openpowerfoundation.org/wp-content/uploads/2018/05/PVT.png) _PVT Zaius motherboard_

## **CPU-GPU NVLink 2.0 Interposer Board**

Also at the OpenPOWER Summit, Rackspace displayed our unique, disaggregated implementation of a CPU-GPU NVLink 2.0 interposer board. This board is ideal for artificial intelligence and deep learning applications.

Further, when combined with PCIe Gen4, we believe the interposer board will provide a reference for the server industry in solving two bottlenecks:

1. The slow CPU-GPU link
2. Slow server-to-server network speed

Both bottlenecks are commonplace today in PCIe Gen3 servers.

[![SlimSAS to SXM2 Interposer supporting Volta GPU and FPGA HBM2 cards](images/SlimSAS-1024x445.jpg)](https://openpowerfoundation.org/wp-content/uploads/2018/05/SlimSAS.jpg) _SlimSAS to SXM2 interposer supporting Volta GPU and FPGA HBM2 cards_

Conference attendees also saw first-in-industry technology demos from Rackspace, including a demo of the world's first production-ready PCIe Gen4 NVM Express system. You can read about that [here](https://openpowerfoundation.org/blogs/openpower-pcie/).

Rackspace expects to run limited-access customer betas later this year, based on Barreleye G2 Accelerator servers.

Customers interested in participating can reach out by emailing [hardware-engineering@lists.rackspace.com](mailto:hardware-engineering@lists.rackspace.com).

@ -1,50 +0,0 @@
---
title: "Big Data and AI: Collaborative Research and Teaching Initiatives with OpenPOWER"
date: "2020-02-13"
categories:
- "blogs"
tags:
- "ibm"
- "power"
- "hpc"
- "big-data"
- "summit"
- "ai"
- "oak-ridge-national-laboratory"
---

[Arghya Kusum Das](https://www.linkedin.com/in/arghya-kusum-das-567a4761/), Ph.D., Asst. Professor, UW-Platteville

![](images/Blog-Post_2.19.20.png)

In the Department of Computer Science and Software Engineering (CSSE) at the University of Wisconsin at Platteville, I work closely with hardware system designers to improve the quality of the institute's research and teaching.

Recently, I have engaged with the OpenPOWER community to improve research efforts and also to help build collaborative education platforms.

## **Accelerating Research on POWER**

As a collaborative academic partner with the OpenPOWER Foundation, I have participated and led sessions at various OpenPOWER Academic workshops. These workshops gave me an opportunity to learn about various features around OpenPOWER and also provided great networking opportunities with many research organizations and customers.

As part of this, I submitted a research proposal to [Oak Ridge National Laboratory](https://www.ornl.gov/) for an allocation on the Summit supercomputing cluster to accelerate my research. With this allocation, I focus on accurate de novo assembly and binning of metagenomic sequences, which can become quite complex with multiple genomes in mixed sequence reads. The computation process is also challenged by the huge volume of the datasets.

Our assembly pipeline involves two major steps. First, a de Bruijn graph-based de novo assembly and second, binning the whole genomes to operational taxonomic units utilizing deep learning techniques. In conjunction with large data sets, these deep learning technologies and scientific methods for big data genome analysis demand more compute cycles per processor than ever before. Extreme I/O performance is also required.
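The first of these two steps can be illustrated with a minimal sketch (illustrative only; the actual Summit pipeline is far more elaborate and GPU-accelerated): build a de Bruijn graph whose nodes are (k-1)-mers, then walk unambiguous edges to extend contigs.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Nodes are (k-1)-mers; each k-mer contributes a prefix -> suffix edge."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def contig_from(graph, start):
    """Greedily extend a contig while the path is unambiguous and acyclic."""
    contig, node, seen = start, start, {start}
    while len(graph[node]) == 1 and graph[node][0] not in seen:
        node = graph[node][0]
        seen.add(node)
        contig += node[-1]
    return contig
```

For example, with reads `["ACGTC"]` and `k=3`, walking from `"AC"` reconstructs the full read; at terabyte scale the same graph construction and traversal are what demand the compute cycles and I/O bandwidth described above.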

The final goal of this project is to accurately assemble terabyte-scale metagenomic datasets, leveraging IBM Power9 technology along with NVIDIA GPUs and NVLink.

## **Building a Collaborative Future**

One of our collaborative visions is to spread HPC education to meet the worldwide need for experts in the corresponding fields. As part of this vision, I recognized the importance of online education and started working on a pilot project to develop an innovative online course curriculum for these cutting-edge domains of technology.

To further facilitate this vision, I'm also working on developing a collaborative online education platform where students can not only receive lectures and deepen their theoretical knowledge, but also get hands-on experience with cutting-edge infrastructure.

I'm interested in collaborating with bright minds, including faculty, students and professionals, to realize this online education goal.

## **Future Workshops and Hackathons**

As part of this collaborative initiative, I plan to organize big data workshops and hackathons, which will provide a forum for disseminating the latest research and a platform for students to get hands-on learning and engage in practical discussion about big data and AI-based technologies.

The first of these planned events is the OpenPOWER Big Data and AI workshop taking place on April 7th, 2020. Attendees will hear about IBM and OpenPOWER partnerships, cutting-edge research on big data, AI, and HPC, including outreach, industry research, and other initiatives.

You can register for the workshop [**here**](https://www.uwplatt.edu/big-data-ai).

Can't wait to see you there!

@ -1,10 +0,0 @@
---
title: "Blog | IT powers new business models"
date: "2014-07-02"
categories:
- "blogs"
---

People and businesses today are rapidly adopting new technologies and devices that are transforming the way they interact with each other and their data.

This digital transformation generates 2.5 quintillion bytes of data associated with the proliferation of mobile devices, social media and cloud computing, and drives tremendous growth opportunity.

@ -1,16 +0,0 @@
---
title: "Members can now request early access to Tyan reference board"
date: "2014-07-10"
categories:
- "blogs"
tags:
- "openpower"
- "power8"
- "tyan"
- "atx"
- "debian"
---

![Tyan reference Board](images/Tyan-reference-Board-300x180.jpg) We are excited by the progress that the OpenPOWER Foundation member companies have made since our public launch in San Francisco back in April. Members can now request early access to the Tyan reference board shown here by emailing [Bernice Tsai](mailto:bernice.tsai@mic.com.tw) at Tyan. This is a single-socket, ATX form factor, POWER8 motherboard on which members can bring up a [Debian Linux distribution](https://wiki.debian.org/ppc64el) (little endian) to start innovating. We look forward to seeing the great ideas that will be generated by working together!

Gordon

@ -1,24 +0,0 @@
---
title: "New OpenPOWER Member Brocade Showcases Work at Mobile World Congress"
date: "2016-02-19"
categories:
- "blogs"
tags:
- "featured"
---

_By Brian Larsen, Director, Partner Business Development, Brocade_

![logo-brocade-black-red-rgb](images/logo-brocade-black-red-rgb.jpg)

In my 32-year career in the IT industry, there has never been a better time to embrace the partnerships needed to meet client requirements, needs and expectations. Brocade has built its business on partnering with suppliers who deliver enterprise-class infrastructure in all the major markets. This collaborative mindset is what led us to the OpenPOWER Foundation, where an ecosystem of over 180 vendors, suppliers, and researchers can build new options for client solutions.

Brocade recognizes that OpenPOWER platforms are providing choice, and with that choice comes the need to enable those platforms with the same networking capabilities that users are familiar with. If you have been in a cave for the last eight years, you may not know that Brocade has broken out of its mold as a Fibre Channel switch vendor and now supports a portfolio of IP networking platforms along with innovative solutions in Software Defined Networking (SDN) and Network Function Virtualization (NFV). Our work will allow our OpenPOWER partners to design end-to-end solutions that include both storage and IP-networked solutions. Use cases for specific industries can be developed for high-speed network infrastructure for M2M communication or compute-to-storage requirements. As target use cases evolve, networking functionality could transform from a physical infrastructure to a virtual architecture where the compute platform is a critical and necessary component.

![OpenPOWER Venn Diagram](images/OpenPOWER-Venn-Diagram.jpg)

The OpenPOWER Foundation's [membership has exploded](https://openpowerfoundation.org/membership/current-members/) since its inception and is clearly making a mark on new data center options for users who expect peak performance to meet today's demanding IT needs. As Brocade's SVP and GM of Software Networking, Kelly Herrell, says, “OpenPOWER processors provide innovation that powers datacenter and cloud workloads”. Enterprise data center and service provider (SP) markets are key areas of focus for Brocade, and by delivering on its [promise of the “New IP”](http://bit.ly/1Oiu13z), businesses will be able to transition to more automation, accelerated service delivery and new revenue opportunities.

Brocade will be at [Mobile World Congress](https://www.mobileworldcongress.com/) in Barcelona and [IBM's InterConnect Conference](http://ibm.co/1KsWIzQ) in Las Vegas from February 22-25. Come see us and let us show you the advantages of being an ecosystem partner with us.

* * *

_![Brian Larsen Brocade](images/Brian-Larsen-Brocade-150x150.jpg)Brian Larsen joined Brocade in July 1991 and has more than 29 years of professional experience in high-end processing, storage, disaster recovery, cloud, virtualization and networking environments. Larsen is the Director of Partner Business Development, responsible for solution and business development within all IBM divisions. For the last 5 years, he has focused on both service provider and enterprise markets, with specific focus areas in cloud, virtualization, Software Defined Networking (SDN), Network Function Virtualization (NFV), Software Defined Storage (SDS) and analytics solutions._

@ -1,8 +0,0 @@
---
title: "Canonical Supporting IBM POWER8 for Ubuntu Cloud, Big Data"
date: "2014-06-27"
categories:
- "blogs"
---

If Ubuntu Linux is to prove truly competitive in the OpenStack cloud and Big Data worlds, it needs to run on more than x86 hardware. And that's what Canonical achieved this month, with the announcement of full support for IBM POWER8 machines on Ubuntu Cloud and Ubuntu Server.

@ -1,81 +0,0 @@
---
title: "Using CAPI and Flash for larger, faster NoSQL and analytics"
date: "2015-09-25"
categories:
- "blogs"
tags:
- "openpower"
- "power8"
- "featured"
- "capi"
- "big-data"
- "databases"
- "ubuntu"
- "redis-labs"
- "capi-series"
---

_By Brad Brech, Distinguished Engineer, IBM Power Systems Solutions_

[![CAPI Flash Benefits Infographic](images/CAPI_Flash_Infographic-475x1024.jpg)](http://ibm.co/1FxOPq9)

## Business Challenge

Suppose you're a game developer with a release coming up. If things go well, your user base could go from zero to hundreds of thousands in no time. And these gamers expect your app to capture and store their data, so the game always knows who's playing and their progress in the game, no matter where they log in. You're implementing an underlying database to serve these needs.

Oh—and you've got to do that without adding costly DRAM to existing systems, and without much of a budget to build a brand-new large shared-memory or distributed multi-node database solution. Don't forget that you can't let your performance get bogged down with I/O latency from a traditionally attached flash storage array.

More and more, companies are choosing NoSQL over traditional relational databases. NoSQL offers simple data models, scalability, and exceptionally speedy access to in-memory data. Of particular interest to companies running complex workloads is NoSQL's high availability for key-value stores (KVS) like [Redis](https://redislabs.com/solutions-redis-labs-on-power) and MemcacheDB, document stores such as MongoDB and CouchDB, and column stores such as Cassandra and BigTable.

## Computing Challenge

NoSQL isn't headache-free.

Running NoSQL workloads fast enough to get actionable insights from them is expensive and complex. That requires your business either to invest heavily in a shared-memory system or to set up a multi-node networked solution that adds complexity and latency when accessing your valuable data.

Back to our game developer and their demanding gamers. As the world moves to the cloud, developers need to offer users rapid access to online content, often tagged with metadata. Metadata needs low response times as it is constantly being accessed by users. NoSQL provides flexibility for content-driven applications to not only provide fast access to data but also store diverse data sets. That makes our game developer an excellent candidate for using CAPI-attached Flash to power a NoSQL database.
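The access pattern in this game scenario is plain key-value reads and writes. A minimal sketch of that pattern (a plain Python dict stands in for a networked KVS such as Redis; the key scheme and field names are illustrative, not from any real product):

```python
# A dict standing in for a key-value store; in production this would be
# a KVS client talking to a server backed by (CAPI-attached) flash.
store = {}

def save_progress(player_id, level, score):
    # One key per player; the value is a small metadata record.
    store[f"player:{player_id}"] = {"level": level, "score": score}

def load_progress(player_id):
    # A single constant-time lookup, no matter where the player logs in.
    return store.get(f"player:{player_id}")

save_progress("alice", level=7, score=1250)
```

In a real deployment, the dict would be replaced by KVS client calls, and a CAPI-attached flash array would extend the memory space backing the store beyond DRAM.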

## The Solution

Here's where CAPI comes in. Because CAPI allows you to attach devices with memory coherency at incredibly low latency, you can use CAPI to attach flash storage that functions more like extended block system memory for larger, faster NoSQL. OpenPOWER Foundation technology innovators including [Redis Labs](https://redislabs.com/solutions-redis-labs-on-power), [Canonical](https://insights.ubuntu.com/2014/10/10/ubuntu-with-redis-labs-altera-and-ibm-power-supply-new-nosql-data-store-solution/), and [IBM](http://ibm.co/1FxOPq9) came together to create this new deployment model, and they built [Data Engine for NoSQL](http://ibm.co/1FxOPq9)—one of the first commercially available CAPI solutions.

CAPI-attached flash enables great things. By CAPI-attaching a 56 TB flash storage array to the POWER8 CPU via an FPGA, the application gets direct access to a large flash array with reduced I/O latency and overhead compared to standard I/O-attached flash. End-users can:

- _Create a fast path to a vast store of memory_
- _Reduce latency by cutting the number of code instructions to retrieve data from 20,000 to as low as 2000, by eliminating I/O overhead[1](#_ftn1)_
- _Increase performance by increasing bandwidth by up to 5X on a per-thread basis[1](#_ftn1)_
- _Lower deployment costs by 3X through massive infrastructure consolidation[2](#_ftn2)_
- _Cut TCO with infrastructure consolidation by shrinking the number of nodes needed from 24 to 1[2](#_ftn2)_

<iframe src="https://www.youtube.com/embed/cCmFc_0xsvA?rel=0&amp;showinfo=0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

## Get Started with Data Engine for NoSQL

Getting started is easy, and our goal is to provide you with the resources you need to begin. This living list will continue to evolve as we provide more guidance, information, and use cases, so keep coming back to stay up to date.

### Learn more about the Data Engine for NoSQL:

- [Data Engine for NoSQL Solution Brief](http://ibm.co/1KTPS44)
- [Data Engine for NoSQL Whitepaper](http://ibm.co/1izYfXN)

### Deploy Data Engine for NoSQL:

- [Contact IBM about Data Engine for NoSQL](http://ibm.co/1FxOPq9) to build the Data Engine for NoSQL configuration for you
- [Get community support](http://ibm.co/1VeInq6) for your solutions and share results with your peers on the [CAPI Developer Community](http://ibm.co/1VeInq6)
- Reach out to the OpenPOWER Foundation community on [Twitter](https://twitter.com/intent/tweet?screen_name=OpenPOWERorg&text=CAPI-Flash%20enables%20me%20to), [Facebook](https://www.facebook.com/openpower), and [LinkedIn](https://www.linkedin.com/grp/home?gid=7460635) along the way

Keep coming back for blog posts from IBM and other OpenPOWER Foundation partners on how you can use CAPI to accelerate computing, networking and storage.

- [CAPI Series 1: Accelerating Business Applications in the Data-Driven Enterprise with CAPI](https://openpowerfoundation.org/blogs/capi-drives-business-performance/)
- [CAPI Series 3: Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI](https://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/)
- [CAPI Series 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/)

 

* * *

**_![BradBrech](images/BradBrech.jpg)About Brad Brech_**

_Brad Brech is a Distinguished Engineer and the CTO of POWER Solutions in the IBM Systems Division. He is currently focused on POWER and OpenPOWER solutions and is the Chief Engineer for the CAPI-attached Flash solution enabler. His responsibilities include technical strategy, solution identification, and working delivery strategies with solutions teams. Brad is a member of the IBM Academy of Technology and a past Board member of The Green Grid._

[\[1\]](#_ftnref1)Based on performance analysis comparing typical I/O Model flow (PCIe) to CAPI Attached Coherent Model flow.

[\[2\]](#_ftnref2) Based on competitive system configuration cost comparisons by IBM and Redis Labs.

@ -1,75 +0,0 @@
---
title: "Accelerating Business Applications in the Data-Driven Enterprise with CAPI"
date: "2015-09-10"
categories:
- "blogs"
tags:
- "openpower"
- "power"
- "featured"
- "capi"
- "acceleration"
- "fpga"
- "performance"
- "capi-series"
---

_By Sumit Gupta, VP, HPC & OpenPOWER Operations at IBM_ _This blog is part of a series:_ _[Pt 2: Using CAPI and Flash for larger, faster NoSQL and analytics](https://openpowerfoundation.org/blogs/capi-and-flash-for-larger-faster-nosql-and-analytics/)_ _[Pt 3: Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI](https://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/)_ _[Pt 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/)_

Every 48 hours, the world generates as much data as it did from the beginning of recorded history through 2003.

The monumental increase in the flow of data represents an untapped source of insight for data-driven enterprises, and puts increasing pressure on computing systems to ingest and analyze it. But today, just raising processor speeds isn't enough. The data-driven economy demands a computing model that delivers equally data-driven insights and breakthroughs at the speed the market demands.

[![CAPI Logo](images/CAPITechnology_Color_Gradient_Stacked_-300x182.png)](http://ibm.co/1MVbP5d)

OpenPOWER architecture includes a technology called Coherent Accelerator Processor Interface (CAPI) that enables systems to crunch through high volumes of data by bringing compute and data closer together. CAPI is an interface that enables close integration of devices with the POWER CPU and gives them coherent access to system memory. CAPI lets system architects deploy acceleration in novel ways for an application and rethink traditional system designs.

[![CAPI-attached vs. traditional acceleration](images/IBMNR_OPF_CAPI_BlogPost1_Image-02-1024x531.jpg)](http://ibm.co/1MVbP5d) _CAPI allows attached accelerators to deeply integrate with POWER CPUs_

CAPI-attached acceleration has three pillars: accelerated computing, accelerated storage, and accelerated networking. These techniques leverage accelerators like FPGAs and GPUs, storage devices like flash, and networking devices like InfiniBand, each connected coherently to a POWER CPU so that it has direct access to the CPU's system memory. These devices, connected via CAPI, are programmable using simple library calls that enable developers to modify their applications to more easily take advantage of accelerators, storage, and networking devices. The CAPI interface is available to members of the OpenPOWER Foundation and other interested developers, and it enables a rich ecosystem of data center technology providers to integrate tightly with POWER CPUs to accelerate applications.

## **What can CAPI do?**

CAPI has had an immediate effect in all kinds of industries and for all kinds of clients:

- **[Healthcare](http://bit.ly/1WiV6KD):** Create customized cancer treatment plans personalized to an individual's unique genetic make-up.
- **Image and video processing:** Facial expression recognition that allows retailers to analyze the facial reactions their shoppers have to their products.
- [**Database acceleration and fast storage**](http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=PM&subtype=SP&htmlfid=POS03135USEN&appname=TAB_2_2_Appname#loaded)**:** Accelerate the performance of flash storage to allow users to search databases in near real-time for a fraction of the cost.
- **[Risk Analysis in Finance](http://bit.ly/1N7UQMY):** Allow financial firms to monitor their risk in real-time with greater accuracy.

## **The CAPI Advantage**

CAPI can be used to:

- **Accelerate Compute** by leveraging a CAPI-attached FPGA to run, for example, Monte Carlo analysis or perform vision processing. The access to the IBM POWER CPU's memory address space is a programmer's dream.
- **Accelerate Storage** by using CAPI to attach flash that can be written to as a massive memory space instead of storage---a process that slashes latency compared to traditional storage IO.
- **Accelerate Networking** by deploying CAPI-attached network accelerators for faster, lower latency edge-of-network analytics.

The intelligent and close integration with IBM POWER CPUs enabled by CAPI removes much of the latency associated with the I/O bus on other platforms (PCI-E). It also makes the accelerator a peer to the POWER CPU cores, allowing it to access system memory natively. Consequently, a very small investment can help your system perform better than ever.

https://www.youtube.com/watch?v=h1SE48_aHRo

## **Supported by the OpenPOWER Foundation Community**

We often see breakthroughs when businesses open their products to developers, inviting them to innovate. To this end, IBM helped create the OpenPOWER Foundation, now with 150 members, dedicated to innovating around the POWER CPU architecture.

IBM and many of our Foundation partners are committed to developing unique, differentiated solutions leveraging CAPI. Many more general and industry-specific solutions are on the horizon. By bringing together brilliant minds from our community of innovators, the possibilities for customers utilizing CAPI technology are endless.

## **Get Started with CAPI**

Getting started with CAPI is easy, and our goal is to provide you with the resources you need to begin. This living list will continue to evolve as we provide you with more guidance, information, and use cases, so keep coming back to be sure you can stay up to date.

1. Learn more about CAPI:
- [Coherent Accelerator Processor Interface (CAPI) for POWER8 Systems](http://ibm.co/1MVbP5d)
2. Get the developer kits:
- [Alpha Data CAPI Developer Kit](http://bit.ly/1F1hzqW)
- [Nallatech CAPI Developer Kit](http://bit.ly/1OieWTK)
3. Get support for your solutions and share results with your peers on the [CAPI Developer Community](http://ibm.co/1XSQtZC)

Along the way reach out to us on [Twitter](https://twitter.com/OpenPOWERorg), [Facebook](https://www.facebook.com/openpower?fref=ts), and [LinkedIn](https://www.linkedin.com/grp/home?gid=7460635).

_This blog is part of a series:_ _[Pt 2: Using CAPI and Flash for larger, faster NoSQL and analytics](https://openpowerfoundation.org/blogs/capi-and-flash-for-larger-faster-nosql-and-analytics/)_ _[Pt 3: Interconnect Your Future with Mellanox 100Gb EDR Interconnects and CAPI](https://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/)_ _[Pt 4: Accelerating Key-value Stores (KVS) with FPGAs and OpenPOWER](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/)_

* * *

**_[![Sumit Gupta](images/sumit-headshot.png)](https://openpowerfoundation.org/wp-content/uploads/2015/09/sumit-headshot.png)About Sumit Gupta_**

_Sumit Gupta is Vice President, High Performance Computing (HPC) Business Line Executive and OpenPOWER Operations. With more than 20 years of experience, Sumit is a recognized industry expert in the fields of HPC and enterprise data center computing. He is responsible for business management of IBM's HPC business and for operations of IBM's OpenPOWER initiative._

@ -1,74 +0,0 @@
---
title: "CAPI SNAP: The Simplest Way for Developers to Adopt CAPI"
date: "2016-11-03"
categories:
- "capi-series"
- "blogs"
tags:
- "featured"
---

_By Bruce Wile, CAPI Chief Engineer and Distinguished Engineer, IBM Power Systems_

Last week at OpenPOWER Summit Europe, [we announced a brand-new Framework](https://openpowerfoundation.org/blogs/openpower-makes-fpga-acceleration-snap/) designed to make it easy for developers to begin using CAPI to accelerate their applications. The CAPI Storage, Network, and Analytics Programming Framework, or CAPI SNAP, was developed through a multi-company effort from OpenPOWER members and is now in alpha testing with multiple early adopter partners.

But what exactly puts the “snap” in CAPI SNAP? To answer that, I wanted to give you all a deeper look into the magic behind CAPI SNAP. The framework extends the CAPI technology by simplifying both the API (the call to the accelerated function) and the coding of the accelerated function itself. By using CAPI SNAP, your application gains performance both through FPGA acceleration and because the compute resources sit closer to the vast amounts of data.

## A Simple API

ISVs will be particularly interested in the programming enablement in the framework. The framework API makes it a snap for an application to call for an accelerated function. The innovative FPGA framework logic implements all the computer engineering interface logic, data movement, caching, and pre-fetching work—leaving the programmer to focus only on the accelerator functionality.

Without the framework, an application writer must create a runtime acceleration library to perform the tasks shown in Figure 1.

![Figure 1: Calling an accelerator using the base CAPI hardware primitives](images/CAPI-SNAP-1.png)

_Figure 1: Calling an accelerator using the base CAPI hardware primitives_

But now with CAPI SNAP, an application merely needs to make a function call as shown in Figure 2. This simple API has the source data (address/location), the specific accelerated action to be performed, and the destination (address/location) to send the resulting data.

![Figure 2: Accelerated function call with CAPI SNAP](images/CAPI-SNAP-2.png)

_Figure 2: Accelerated function call with CAPI SNAP_

The framework takes care of moving the data to the accelerator and putting away the results.

## Moving the Compute Closer to the Data

The simplicity of the API parameters is elegant and powerful. Not only can source and destination addresses be coherent system memory locations, but they can also be attached storage, network, or memory addresses. For example, if a framework card has attached storage, the application could source a large block (or many blocks) of data from storage, perform an action such as a search, intersection, or merge function on the data in the FPGA, and send the search results to a specified destination address in main system memory. This method has large performance advantages compared to the standard software method as shown in Figure 3.

![Figure 3: Application search function in software (no acceleration framework)](images/CAPI-SNAP-3-1024x538.png)

_Figure 3: Application search function in software (no acceleration framework)_

Figure 4 shows how the source data flows into the accelerator card via the QSFP+ port, where the FPGA performs the search function. The framework then forwards the search results to system memory.

![Figure 4: Application with accelerated framework search engine](images/CAPI-SNAP-4-1024x563.png)

_Figure 4: Application with accelerated framework search engine_

The performance advantages of the framework are twofold:

1. By moving the compute (in this case, search) closer to the data, the FPGA has a higher bandwidth access to storage.
2. The accelerated search on the FPGA is faster than the software search.

Table 1 shows a 3x performance improvement between the two methods. By moving the compute closer to the data, the FPGA has a much higher ingress (or egress) rate versus moving the entire data set into system memory.

\[table id=19 /\]

## Simplified Programming of Acceleration Actions

The programming API isn't the only simplification in CAPI SNAP. The framework also makes it easy to program the “action code” on the FPGA. The framework takes care of retrieving the source data (whether it's in system memory, storage, networking, etc.) as well as sending the results to the specified destination. The programmer, writing in a high-level language such as C/C++ or Go, needs only to focus on their data transform, or “action.” Framework-compatible compilers translate the high-level language to Verilog, which in turn gets synthesized using Xilinx's Vivado toolset.

With CAPI SNAP, the accelerated search code (searching for one occurrence) is this simple:

```c
for (i = 0; i < Search.text_size; i++) {
    if (buffer[i] == Search.text_string) {
        Search.text_found_position = i;
    }
}
```

The open source release will include multiple, fully functional example accelerators to provide users with the starting points and the full port declarations needed to receive source data and return destination data.

## Make CAPI a SNAP

Are you looking to explore CAPI SNAP for your organization's own data analysis? Then apply to be an early adopter of CAPI SNAP by emailing us directly at [capi@us.ibm.com](mailto:capi@us.ibm.com). Be sure to include your name, organization, and the type of accelerated workloads you'd like to explore with CAPI SNAP.

You can also read more about CAPI and its capabilities in the accelerated enterprise in our [CAPI series on the OpenPOWER Foundation blog](https://openpowerfoundation.org/blogs/capi-drives-business-performance/).

You will continue to see a drumbeat of activity around the framework, as we release the source code and add more and more capabilities in 2017.

@ -1,36 +0,0 @@
---
title: "India's Centre for Development of Advanced Computing Joins OpenPOWER to Spread HPC Education"
date: "2016-03-07"
categories:
- "blogs"
---

_By Dr. VCV Rao and Mr. Sanjay Wandheker_

[![CDAC Logo](images/cdac.preview-300x228.png)](https://openpowerfoundation.org/wp-content/uploads/2016/03/cdac.preview.png)

An open ecosystem relies on collaboration to thrive, and at the Centre for Development of Advanced Computing (C-DAC), we fully embrace that belief.

C-DAC is a pioneer in several advanced areas of IT and electronics, and has always been a proactive supporter of technology innovation. It is currently engaged in several national ICT (Information and Communication Technology) projects of critical value to India and the world, and C-DAC's thrust on technology innovation has led to the creation of an ecosystem in which multiple technologies coexist today on a single platform.

## Driving National Technology Projects

Within this ecosystem, C-DAC is working on strengthening national technological capabilities in the context of global developments around advanced technologies like high performance computing and grid computing, multilingual computing, software technologies, professional electronics, cybersecurity and cyber forensics, and health informatics.

C-DAC is also focused on technology education and training, and offers several degree programs including our HPC-focused _C-DAC Certified HPC Professional Certification Programme (CCHPCP)_. We also provide advanced computing diploma programs through the Advanced Computing Training Schools (ACTS) located all over India.

One of C-DAC's critical projects is the “National Supercomputing Mission (NSM): Building Capacity and Capability,” the goal of which is to create a shared environment for the advancements in information technology and computing that impact the way people lead their lives.

## Partnering with OpenPOWER

[![CDACStudents](images/maxresdefault-1024x768.jpg)](https://openpowerfoundation.org/wp-content/uploads/2016/03/maxresdefault.jpg)

The OpenPOWER Foundation makes for an excellent partner in this effort, and through our collaboration, we hope to further strengthen supercomputing access and education by leveraging the OpenPOWER Foundation's growing ecosystem and technology. And with OpenPOWER, we will develop and refine HPC coursework and study materials to skill the next generation of HPC programmers on OpenPOWER platforms with GPU accelerators.

In addition, C-DAC is eager to explore the potential of OpenPOWER hardware and software in addressing some of our toughest challenges. OpenPOWER offers specific technology features for HPC research, including IBM XLF compilers, ESSL libraries, hierarchical memory with good memory bandwidth per socket, strong I/O bandwidth, CAPI interfaces with performance gains over PCIe, and the potential of POWER8/9 with NVIDIA GPUs. These OpenPOWER innovations will provide an opportunity to understand performance gains for a variety of applications in HPC and Big Data.

## Come Join Us

We're very eager to move forward, focusing on exposure to new HPC tools on OpenPOWER-driven systems. C-DAC plans to be an active member of the OpenPOWER community by making open source HPC software for science and engineering applications available on OpenPOWER systems with GPU acceleration.

To learn more about C-DAC and to get involved in our work with OpenPOWER, visit us online at [www.cdac.in](http://www.cdac.in). If you would like to learn more about our educational offerings and coursework, go to [http://bit.ly/1Sgp4ix](http://bit.ly/1Sgp4ix).

@ -1,22 +0,0 @@
---
title: "Center of Accelerated Application Readiness: Preparing applications for Summit"
date: "2015-03-18"
categories:
- "blogs"
---

### Abstract

The hybrid CPU-GPU architecture is one of the main tracks for dealing with the power limitations imposed on high performance computing systems. It is expected that large leadership computing facilities will, for the foreseeable future, deploy systems with this design to address science and engineering challenges for government, academia, and industry. Consistent with this trend, the U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to the Oak Ridge National Laboratory (ORNL) in 2017. This new supercomputer, named Summit, will provide at least five times the performance of Titan, the OLCF's current hybrid CPU+GPU leadership system, on science applications, and will become the next peak in leadership-class computing systems for open science. In response to a call for proposals, the OLCF has selected science and engineering application development teams and will partner with them on porting and optimizing their applications and carrying out a science campaign at scale on Summit.

### Speaker Organization

National Center for Computational Sciences Oak Ridge National Laboratory Oak Ridge, TN, USA

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/04/20150318-GTC.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/04/20150318-GTC.pdf)

Back to Summit Details

@ -1,29 +0,0 @@
---
title: "Chelsio Joins OpenPOWER Foundation"
date: "2014-11-06"
categories:
- "press-releases"
- "blogs"
---

SUNNYVALE, Calif., Nov. 6, 2014 ­/PRNewswire/ -- Chelsio Communications, the leading provider of 40Gb Ethernet (40GbE) Unified Wire Adapters and ASICs, announced today that it has joined the OpenPOWER Foundation, expanding the open technical community for collaboration on the POWER architecture.

"We are proud to join the growing members of the OpenPOWER Foundation, in the open development of the POWER systems architecture. Chelsio has long been at the forefront of advanced networking ASIC design, from its first POWER systems design wins through to today's leading Terminator 5 (T5) Unified Wire adapters," said Kianoosh Naghshineh, CEO, Chelsio Communications.

With its proven Terminator ASIC technology designed in more than 100 OEM platforms and the successful deployment of more than 750,000 ports, Chelsio has enabled unified wire solutions for LAN, SAN and cluster traffic. With its unique ability to fully offload TCP, iSCSI, FCoE and iWARP RDMA protocols on a single chip, Chelsio adapter cards remove the burden of communications responsibilities and processing overhead from servers and storage systems, resulting in a dramatic increase in application performance. The added advantage of traffic management and quality of service (QoS) running at 40Gbps line rate ensures today's Big Data, Cloud, and enterprise data centers run efficiently at high performance.

"The OpenPOWER Foundation is changing the game, driving innovation and ultimately offering more choices in the industry," said Brad McCredie, President, OpenPOWER Foundation. "We look forward to the participation of Chelsio Communications and their contributions toward creating innovative and winning solutions based on POWER architecture."

Chelsio T5 Unified Wire Adapters Chelsio Unified Wire Adapters, based upon the fifth generation of its high performance Terminator (T5) ASIC, are designed for data, storage and high performance clustering applications.

Read more about the [Chelsio T5 Unified Wire Adapters.](http://www.chelsio.com/nic/t5-unified-wire-adapters/)

About Chelsio Communications, Inc. Chelsio Communications is leading the convergence of networking, storage and clustering interconnects and I/O virtualization with its robust, high-performance and proven Unified Wire technology. Featuring a highly scalable and programmable architecture, Chelsio is shipping multi-port 10 Gigabit Ethernet (10GbE) and 40GbE adapter cards, delivering the low latency and superior throughput required for high-performance compute and storage applications. For more information, visit the company online at www.chelsio.com.

All product and company names herein are trademarks of their registered owners.

Logo - [http://photos.prnewswire.com/prnh/20130611/SF30203LOGO](http://photos.prnewswire.com/prnh/20130611/SF30203LOGO)

SOURCE Chelsio Communications

RELATED LINKS [http://www.chelsio.com](http://www.chelsio.com)

@ -1,20 +0,0 @@
---
title: "China POWER Technology Alliance (CPTA)"
date: "2015-01-19"
categories:
- "blogs"
---

### Objective

The objective is to position China POWER Technology Alliance (CPTA) as a mechanism to help global OpenPOWER Foundation members engage with China organizations on POWER-based implementations in China.

### Abstract

The OpenPOWER ecosystem has grown quickly in the China market, with 12 OpenPOWER Foundation members added in 2014. The China POWER Technology Alliance was established in October 2014, led by China's Ministry of Industry and Information Technology (MIIT), to accelerate the building of a secure and trusted IT industry chain in China by leveraging OpenPOWER technology. This presentation links up CPTA and global OPF members, to help global OPF members use CPTA as a stepping stone into the China market. It will focus on explaining to global OPF members WHY they should come to China and, above all, HOW to come to China, and WHAT support services CPTA will provide to them. It'll also create clarity between CPTA and OPF in China, so that OPF members can leverage CPTA as a (non-mandatory) on-ramp to China.

### Speaker

Zhu Ya Dong (to be confirmed), Chairman of PowerCore, China, Platinum Member of OpenPOWER Foundation

[Back to Summit Details](2015-summit/)

@ -1,29 +0,0 @@
---
title: "Cirrascale® Joins OpenPOWER™ Foundation, Announces GPU-Accelerated POWER8®-Based Multi-Device Development Platform"
date: "2015-03-20"
categories:
- "press-releases"
- "blogs"
tags:
- "featured"
---

### The Cirrascale RM4950 4U POWER8-based development platform, with Cirrascale SR3514 PCIe switch riser, enables up to four NVIDIA Tesla GPU Accelerators or other compatible PCIe Gen 3.0 devices.

![gI_90416_RM4950_SideView_PR](images/gI_90416_RM4950_SideView_PR.png)Cirrascale Corporation®, a premier developer of build-to-order, open architecture blade-based and rackmount computing infrastructure, today announced its membership within the OpenPOWER™ Foundation and the release of its RM4950 development platform, based on the IBM® POWER8® 4-core Turismo SCM processor, and designed with NVIDIA® Tesla® GPU accelerators in mind. The new POWER8-based system provides a solution perfectly aligned to support GPU-accelerated big data analytics, deep learning, and scientific high-performance computing (HPC) applications.

“As Cirrascale dives deeper into supporting more robust installations of GPU-accelerated applications, like those used in big data analytics and deep learning, we're finding customers rapidly adopting disruptive technologies to advance their high-end server installations,” said David Driggers, CEO, Cirrascale Corporation. “The RM4950 POWER8-based server provides a development platform unique to the marketplace that has the ability to support multiple PCIe devices on a single root complex while enabling true scalable performance of GPU-accelerated applications.”

The secret sauce of the RM4950 development platform lies in the company's 80-lane Gen3 PCIe switch-enabled riser, the Cirrascale SR3514. It has been integrated into several recent product releases to create an extended PCIe fabric supporting up to four NVIDIA Tesla GPU accelerators, or other compatible PCIe devices, on a single PCIe root complex.

“Cirrascales new servers enable enterprise and HPC customers to take advantage of GPU acceleration with POWER CPUs,” said Sumit Gupta, general manager of Accelerated Computing at NVIDIA. “The servers support multiple GPUs, which dramatically enhances performance for a range of applications, including data analytics, deep learning and scientific computing.”

The system is the first of its type for Cirrascale as a new member of the OpenPOWER Foundation. The company joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technologies as well as industry leading open source software aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers. The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform.

“The Cirrascale RM4950 4U POWER8-based development platform is a great example of how new advancements are made possible through open collaboration,” said Ken King, General Manager of OpenPOWER Alliances. “Our OpenPOWER Foundation members are coming together to create meaningful disruptive technologies, providing the marketplace with unique solutions to manage today's big data needs.”

The Cirrascale RM4950 development platform is the first of the company's POWER8-based reference systems, with production environment systems planned for announcement later this year. The current development platform is immediately available to order and will be shipping in volume in Q2 2015. Licensing opportunities will also be available immediately to both customers and partners.

About Cirrascale Corporation Cirrascale Corporation is a premier provider of custom rackmount and blade server solutions developed and engineered for today's conventional data centers. Cirrascale leverages its patented Vertical Cooling Technology, engineering resources, and intellectual property to provide the industry's most energy-efficient standards-based platforms with the lowest possible total cost of ownership in the densest form factor. Cirrascale sells to large-scale infrastructure operators, hosting and managed services providers, cloud service providers, government, higher education, and HPC users. Cirrascale also licenses its award-winning technology to partners globally. To learn more about Cirrascale and its unique data center infrastructure solutions, please visit [http://www.cirrascale.com](http://www.prweb.net/Redirect.aspx?id=aHR0cDovL3d3dy5jaXJyYXNjYWxlLmNvbQ==) or call (888) 942-3800.

Cirrascale and the Cirrascale logo are trademarks or registered trademarks of Cirrascale Corporation. NVIDIA and Tesla are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. IBM, POWER8, and OpenPOWER are trademarks or registered trademarks of International Business Machines Corporation in the U.S. and other countries. All other names or marks are property of their respective owners.

@ -1,37 +0,0 @@
---
title: "Clarkson University Joins OpenPOWER Foundation"
date: "2017-03-21"
categories:
- "press-releases"
- "blogs"
tags:
- "featured"
---

# 3.21.17

# Clarkson University Joins OpenPOWER Foundation

Clarkson University has joined the OpenPOWER Foundation, an open development community based on the POWER microprocessor architecture.   

![IBM POWER8 Processor](images/openpower-300.jpg)POWER CPU denotes a series of high-performance microprocessors designed by IBM.

Clarkson joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology as well as industry leading open source software aimed at delivering more choice, control and flexibility to developers of next-generation, hyper-scale and cloud data centers.

The group makes POWER hardware and software available to open development for the first time, as well as making POWER intellectual property licensable to others, greatly expanding the ecosystem of innovators on the platform.

With the POWER hardware and software, the researchers at Clarkson, especially the faculty in the Wallace H. Coulter School of Engineering's Department of Electrical & Computer Engineering, will be able to elevate their research in multicore/multithreading architectures, the interaction between system software and micro-architecture, and hardware acceleration techniques based on the POWER microprocessor architecture. The Clarkson faculty intend to join the OpenPOWER Foundation's hardware architecture, system software, and hardware accelerator workgroups.

"As a member of the OpenPOWER Foundation, we will be able to explore the state-of-the-art hardware and software design used in supercomputer and cloud computing platforms, as well as collaborating with researchers from industry and other institutions," said Assistant Professor of Electrical & Computer Engineering Chen Liu, who is leading the Computer Architecture and Microprocessor Engineering Laboratory at Clarkson.

"The development model of the OpenPOWER Foundation is one that elicits collaboration and represents a new way of exploiting and innovating around processor technology," says OpenPOWER Foundation President Bryan Talik. "With the POWER architecture designed for Big Data and Cloud, new OpenPOWER Foundation members like Clarkson University will be able to add their own innovations on top of the technology to create new applications that capitalize on emerging workloads."

To learn more about OpenPOWER and view the complete list of current members, visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org/).  

Clarkson University educates the leaders of the global economy. One in five alumni already leads as an owner, CEO, VP or equivalent senior executive of a company. With its main campus located in Potsdam, N.Y., and additional graduate program and research facilities in the Capital Region and Beacon, New York, Clarkson is a nationally recognized research university with signature areas of academic excellence and research directed toward the world's pressing issues. Through more than 50 rigorous programs of study in engineering, business, arts, education, sciences and the health professions, the entire learning-living community spans boundaries across disciplines, nations and cultures to build powers of observation, challenge the status quo, and connect discovery and innovation with enterprise.

**Photo caption: IBM POWER8 Processor.**

**\[A photograph for media use is available at [http://www.clarkson.edu/news/photos/openpower.jpg](http://clarkson.edu/news/photos/openpower.jpg).\]**

\[News directors and editors: For more information, contact Michael P. Griffin, director of News & Digital Content Services, at 315-268-6716 or [mgriffin@clarkson.edu](mailto:mgriffin@clarkson.edu).\]

@ -1,57 +0,0 @@
---
title: "Diversify Cloud Computing Services on OpenPOWER with NEC's Resource Disaggregated Platform for POWER8 and GPUs"
date: "2016-05-24"
categories:
- "blogs"
tags:
- "featured"
---

_By Takashi Yoshikawa and Shinji Abe, NEC Corporation_

The Resource Disaggregated (RD) Platform expands the use of cloud data centers in not only office applications, but also high performance computing (HPC) with the ability to simultaneously handle multiple demands for data storage, networks, and numerical/graphics processes. The RD platform performs computation by allocating devices from a resource pool at the device level to scale up individual performance and functionality.

Since the fabric is [ExpEther](http://www.expether.org/index.html), open standard hardware and software can be utilized to build custom computer systems that deliver faster, more powerful, and more reliable computing solutions effectively to meet the growing demand for performance and flexibility.

## Resource Disaggregated Computing Platform

The figure shown below is the RD computing platform at Osaka University. In use since 2013, it provides GPU computing power for university students and researchers at Osaka University and other universities throughout Japan.

![NEC RDCP 1](images/NEC-RDCP-1-1024x469.png)

The most differentiating point of the system is that the computing resources are custom-configured by attaching the necessary devices at the standard PCIe level, meaning you can scale up the performance of a certain function by attaching PCIe standard devices without any modification of software or hardware.

For example, if you need the processing power of four GPUs for machine learning, you can attach them from the resource pool of GPUs to a single server, and when the job is finished, you can release them back into the pool. With this flexible reconfiguration of the system, you can use a standard 1U server as a GPU host. The resource disaggregated system is a very cost-effective architecture to use GPUs in cloud data centers.

## [ExpEther Technology](https://openpowerfoundation.org/blogs/nec-acceleration-for-power/)

![nec rdcp 2](images/nec-rdcp-2-1024x414.png)

From the software view, Ethernet is transparent. Therefore, the combination of the ExpEther engine chip and Ethernet is equivalent to a single-hop standard PCIe switch, even if multiple Ethernet switches exist in the network. By adopting this distributed switch architecture, the system can extend the connection distance to a few kilometers and scale to thousands of ports. And it is still just a standard PCI Express switch, so customers can reuse their vast assets of PCIe hardware and software without any modification.

By using ExpEther technology as the interconnect fabric, an RD computing system can be built not only at rack scale but also at multi-rack and data center scale without performance degradation, because all the functions are implemented in a single hardware chip.

## POWER8 Server and ExpEther

We have made an experimental setup with a Tyan POWER8 server, Habanero, and ExpEther. The 40G ExpEther HBA is mounted in the POWER8 server, with an NVIDIA K80 GPU and an SSD in remote locations connected through a standard 40GbE Mellanox switch.

![nec rdcp 3](images/nec-rdcp-3-1024x619.png)

We measured the GPU performance by using CUDA N-Body. The figure below shows that with ExpEther we can get performance comparable to a K80 directly inserted in a PCIe slot inside the server. This is because most of the simulation runs on the GPUs with little interaction with the host node or other GPUs. Of course, results may vary depending on the workload.

![nec rdcp 4](images/nec-rdcp-4-1024x590.png)
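CUDA N-Body computes all-pairs gravitational interactions, which is why it parallelizes so well on GPUs with little host traffic. As a purely illustrative sketch of that computation (not the CUDA benchmark code itself; the function name and constants here are our own), a naive all-pairs step looks like this:

```python
import math

def nbody_step(pos, vel, mass, dt=0.01, softening=1e-3):
    """Advance all bodies one step under mutual gravity (G = 1, units arbitrary)."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    # All-pairs force accumulation: O(n^2) work, which maps well to GPU threads.
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + softening ** 2
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    # Simple Euler integration of velocities and positions.
    for i in range(n):
        for k in range(3):
            vel[i][k] += acc[i][k] * dt
            pos[i][k] += vel[i][k] * dt
    return pos, vel
```

Because each body's force sum is independent, the inner loop can be spread across thousands of GPU threads, and the data stays resident in GPU memory between steps, which matches the behavior we observed over ExpEther.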

As for the remotely mounted SSD, we saw about 463K IOPS in FIO benchmark testing (random 4KB read). This IOPS value is almost the same as for a locally mounted SSD, meaning that there is no performance degradation in SSD reads.

![nec rdcp 5](images/nec-rdcp-5.jpg)

![nec rdcp 6](images/nec-rdcp-6-1024x652.png)
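The FIO test described above (random 4KB read) corresponds to a job file along these lines; the device path, queue depth, and job count are illustrative assumptions, not parameters reported here:

```ini
; 4 KiB random-read job, roughly matching the test described above.
; filename, iodepth and numjobs are assumptions.
[randread-4k]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=4
runtime=60
time_based=1
filename=/dev/nvme0n1
group_reporting=1
```

With `group_reporting=1`, fio aggregates the jobs into a single IOPS figure comparable to the ~463K IOPS cited.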

## Conclusion

- The Resource Disaggregated Platform expands the use of cloud data centers to not only office applications but also high performance computing.
- The Resource Disaggregated Platform performs computation by allocating devices from a resource pool at the device level to scale up individual performance and functionality.
- Since the fabric is ExpEther (Distributed PCIe Switch over Ethernet), open standard hardware and software can be utilized to build custom computer systems.
- A combination of the latest x8 PCIe Gen3 dual 40GbE ExpEther and a POWER8 server shows potential for intensive computing power.

To learn more about the ExpEther Consortium, visit them at [http://www.expether.org/index.html](http://www.expether.org/index.html). To learn more about NEC's ExpEther and OpenPOWER, go to [https://openpowerfoundation.org/blogs/nec-acceleration-for-power/](https://openpowerfoundation.org/blogs/nec-acceleration-for-power/).

---
title: "Combining Out-of-Band Monitoring with AI and Big Data for Datacenter Automation in OpenPOWER"
date: "2019-01-24"
categories:
- "blogs"
tags:
- "featured"
---

_Featuring OpenPOWER Academic Member: [The University of Bologna](https://www.unibo.it/en)_

By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM

OpenPOWER hosted its [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/), gathering academic members of the OpenPOWER community. These members were able to share their research and developments.

One of the presenters was Professor [Andrea Bartolini](https://www.unibo.it/sitoweb/a.bartolini/en) of the University of Bologna. The focus of his presentation was datacenter automation. Bartolini shared how this process can be implemented, examples of applications, and future work within the Power architecture.

Datacenter automation is an emerging trend that was developed to help with the increased complexity of supercomputers. To get this type of automation, heterogeneous sensors are placed in an environment to collect and transmit data, which are then extracted and interpreted using big data and artificial intelligence. These technologies allow for anomaly detection, which can improve the overall learning and performance of datacenters. After information is interpreted, learned feedback is then sent back to the sensors, which optimizes the device.
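As a toy illustration of the anomaly-detection step in that pipeline (a simple z-score threshold, not the actual big-data/AI stack described in the talk; the function and threshold are our own), a detector over a stream of sensor readings might look like:

```python
import statistics

def detect_anomalies(readings, threshold=3.0):
    """Return indices of readings more than `threshold` standard deviations
    from the mean -- a toy stand-in for learned anomaly detection."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # constant signal: nothing to flag
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / stdev > threshold]
```

In a production system this role is played by models trained on heterogeneous sensor data; the flagged events then drive the feedback loop that retunes the nodes.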

Bartolini identifies a few specific usages that this automation process can be applied to:

- Verify and clarify node performance
- Detect security hazards
- Predictive maintenance

Bartolini then focused the rest of his presentation on sharing different applications, including:

- [D.A.V.I.D.E](https://www.e4company.com/en/?id=press&section=1&page=&new=davide_supercomputer), a supercomputer designed and developed by E4, was ranked in the [Top500](https://www.top500.org/system/179104). This system is used for measuring, monitoring and collecting data. D.A.V.I.D.E was designed in collaboration with Bartolini and the University of Bologna.
- Out-of-Band Monitoring: node monitoring that allows for real-time frequency analysis of the power supply.

Future works of this emerging practice of automating datacenters include:

- Extending the approach to in-house security and housekeeping tasks in datacenters
- Leveraging OpenBMC and custom firmware to deploy as part of BMC
- Applying process to larger Power9 systems

If you'd like to learn more, Bartolini's full session and slides are below.

https://www.youtube.com/watch?v=bJ-R7SiFyho


<iframe style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 5px; max-width: 100%;" src="//www.slideshare.net/slideshow/embed_code/key/2FoaV2CommbJWg" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe>

**[Combining out - of - band monitoring with AI and big data for datacenter automation in OpenPOWER](//www.slideshare.net/ganesannarayanasamy/combining-out-of-band-monitoring-with-ai-and-big-data-for-datacenter-automation-in-openpower "Combining out - of - band monitoring with AI and big data for datacenter automation in OpenPOWER")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)**

---
title: "Continuing the Datacenter Revolution"
date: "2016-01-05"
categories:
- "blogs"
tags:
- "featured"
- "ecosystem"
- "board-members"
- "blogs"
---

_By John Zannos and Calista Redmond_

![OPF logo](images/OPF-logo.jpg)

Dear Fellow Innovators,

As the newly elected Chair and President of the OpenPOWER Foundation, we would like to take this opportunity to share our vision as we embark on a busy 2016.  Additionally, we want to make sure our fellow members -- all 175 of us and growing -- are aware of the many opportunities we have to contribute to our vibrant and growing organization.

## Our Vision

First, the vision.  Through an active group of leading technologists, OpenPOWER in its first two formative years built a strong technical foundation -- developing the literal bedrock of hardware and software building blocks required to enable end users to take advantage of POWER's open architecture.  With several jointly developed OpenPOWER-based servers already in market, a [growing network](http://developers.openpowerfoundation.org/) of physical and cloud-based test servers and a wide range of other [resources and tools](https://openpowerfoundation.org/technical/technical-resources/) now available to developers around the world, we have a strong technical base.  We are now moving into our next phase: scaling the OpenPOWER ecosystem.  How will we do this?  With an unwavering commitment to optimize as many workloads on the POWER architecture as possible.

It is in this vein that we have identified our top three priorities for 2016:

1. **Tackle system bottlenecks** through collaboration on memory bandwidth, acceleration, and interconnect advances.
2. **Grow workloads** **and software community** optimizing on OpenPOWER.
3. **Further OpenPOWER's validation through adoption** conveyed via member and end user testimonials, benchmarking, and industry influencer reports.

As employees of Canonical and IBM, and active participants in OpenPOWER activities stemming back to the early days, we share a deep commitment to open ecosystems as a driver for meaningful innovation.  Combining Canonical's leadership in growing software applications on the POWER architecture with IBM's base commitment to open development on top of the POWER architecture at all levels of the stack, we stand ready to help lead an even more rapid expansion of the OpenPOWER ecosystem in 2016.  This commitment, however, extends well beyond Canonical and IBM to the entire [Board leadership](https://openpowerfoundation.org/about-us/board-of-directors/), which continues to reflect the diversity of our membership.  Two of the original founders of OpenPOWER -- our outgoing chair Gordon MacKean of Google and president Brad McCredie of IBM -- will remain close and serve as non-voting Board Advisors, providing guidance on a wide range of technical and strategic activities as needed. To read Gordon MacKean's perspective on OpenPOWER's growth, we encourage you to read his [personal Google+ post](https://plus.google.com/112847999124594649509/posts/PDcmTZzsHDg).

In driving OpenPOWER's vision forward, we are fortunate to have at our disposal not just our formal leadership team, but a deep bench of talent throughout the entire organization: literally dozens of the world's leading technologists representing all levels of the technology stack across the globe. With your support behind us, we're sure the odds are stacked in our favor and we can't wait to get started.

## Get Involved

So, now that you've heard our vision for 2016, how can you get involved?

[![OpenPOWER_Summit2016_logo_950](images/OpenPOWER_Summit2016_logo_950.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/10/OpenPOWER_Summit2016_logo_950.jpg)

- **Make the most out of the 2016 OpenPOWER Summit** Register to attend, exhibit, submit a poster or present at this year's North American OpenPOWER Summit in San Jose April 5-7. And, think about what OpenPOWER-related news you can reveal at the show.  We are expecting 200+ press and analysts to attend, so this is an opportunity for members to get some attention.  Be on the lookout for a “Call for News” email soon.  Click [here](https://openpowerfoundation.org/openpower-summit-2016/) to register and get more details.  Specific questions can be directed to the Summit Steering Committee at [opfs2016sg@openpowerfoundation.org](mailto:opfs2016sg@openpowerfoundation.org).
- **Contribute your technical expertise** Share your technical abilities and drive innovation with fellow technology industry leaders through any of the established [Technical Work Groups](https://openpowerfoundation.org/technical/working-groups/). Contact Technical Steering Committee Chair Jeff Brown at [jeffdb@us.ibm.com](mailto:jeffdb@us.ibm.com) to learn more or to join a work group.
- **Shape market perceptions** Share your marketing expertise and excitement for the OpenPOWER Foundation by joining the marketing committee. Email the marketing committee at [mktg@openpowerfoundation.org](mailto:mktg@openpowerfoundation.org) to join the committee or learn more.
- **Join the Academic Discussion Group** Participate in webinars, workshops, contests, and collaboration activities. Email Ganesan Narayanasamy at [ganesana@in.ibm.com](mailto:ganesana@in.ibm.com) to join the group or learn more.
- **Link up with geographic interests** European member organizer is Amanda Quartly at [mandie\_quartly@uk.ibm.com](mailto:mandie_quartly@uk.ibm.com). The Asia Pacific member organizer is Calista Redmond at [credmond@us.ibm.com](mailto:credmond@us.ibm.com)
- **Tap into technical resources** Use and build on the technical resources, cloud environments, and loaner systems available. Review what [technical resources and tools](https://openpowerfoundation.org/technical/technical-resources/) are now available and the [growing network](http://developers.openpowerfoundation.org/) of physical and cloud-based test servers available worldwide.
- **Engage OpenPOWER in industry events and forums** Contact Joni Sterlacci at [j.sterlacci@ieee.org](mailto:j.sterlacci@ieee.org) if you know of an event which may be appropriate for OpenPOWER to have an official presence.
- **Share your stories** Send your end-user success stories, benchmarks, and product announcements to OpenPOWER marketing committee member Greg Phillips at [gregphillips@us.ibm.com](mailto:gregphillips@us.ibm.com).
- **Write a blog** Submit a blog to be published on the [OpenPOWER Foundation blog](https://openpowerfoundation.org/newsevents/#category-blogs) detailing how you're innovating with OpenPOWER. Send details to OpenPOWER Foundation blog editor Sam Ponedal at [sponeda@us.ibm.com](mailto:sponeda@us.ibm.com).
- **Join the online discussion** Follow and join the OpenPOWER social conversations on [Twitter](https://twitter.com/openpowerorg), [Facebook](https://www.facebook.com/openpower), [LinkedIn](https://www.linkedin.com/groups/7460635) and [Google+](https://plus.google.com/117658335406766324024/posts).

And, finally, please do not hesitate to reach out to either of us personally to discuss anything OpenPOWER-related at any time.  Seriously.  We'd love to hear from you!

Yours in collaboration,

John Zannos, OpenPOWER Chair -- [john.zannos@canonical.com](mailto:john.zannos@canonical.com)

Calista Redmond, OpenPOWER President -- [credmond@us.ibm.com](mailto:credmond@us.ibm.com)

---
title: "CreativeC Optimizes VASP on Power for Alloy Design"
date: "2018-11-29"
categories:
- "blogs"
tags:
- "featured"
---

[![Greg Scantlen, CEO, CreativeC](images/Greg_S_headshot.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/Greg_S_headshot.jpg)

_Greg Scantlen, CEO, CreativeC_

[The Vienna Ab initio Simulation Package](https://www.vasp.at/index.php/about-vasp/59-about-vasp), also known as VASP, is a popular and powerful HPC application. It is one of the most popular tools in atomistic materials modeling, covering tasks such as electronic structure calculations and quantum-mechanical molecular dynamics.

It has been developed at the University of Vienna in Austria for close to thirty years and contains roughly half a million lines of code. Currently, it's used by more than 1,400 research groups in academia and industry worldwide and consistently ranks among the top 10 applications on national supercomputers.

But despite its significant impact on technology, there is one fundamental problem with VASP and similar programs: it does not scale very well. So instead of accelerating workloads, naively running VASP on more nodes can have the opposite effect. In fact, we observed that VASP actually runs _slower_ when operating on more than eight traditional nodes.

Since VASP doesn't scale well on traditional clusters, it's a perfect fit for the OpenPOWER architecture. Because OpenPOWER has the highest compute density available in a single node, we applied for and received grant funding to run quantum chemistry simulations with VASP on OpenPOWER.

Now, we're running just as well or a bit faster on a single OpenPOWER node as we were previously on eight x86 Linux-based compute nodes. More importantly, in the early phase of this project, we don't have to compete with rigid time limits and full queues at shared computing facilities. Instead of artificially adding break points and chopping the project into smaller parcels, we can explore larger model sizes and focus on the science.

The result is a more efficient use of computing resources, reduced waiting time, and an accelerated timeline for innovative, ground-breaking research.

One project we are pursuing with VASP seeks to improve hip and knee implants. Often, titanium alloy implants used in hip and knee implants are much stronger than bone, sometimes causing bone atrophy following an implant procedure. Our goal is to use VASP on OpenPOWER to identify an alloy that has properties more compatible to bone than traditional titanium alloy.

Improved hip and knee implants are only one advancement that could be made from running VASP on an OpenPOWER system and there are certainly others!

[![CreativeC logo](images/CreativeC-LOGO-300dpi-RGB-page-001-300x262.jpg)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/CreativeC-LOGO-300dpi-RGB-page-001.jpg)

**About CreativeC**

CreativeC's mission is to facilitate Science and Engineering by computing faster. CreativeC's discipline is co-designed High Performance Computing (HPC). We team with expert software developers to offer specialized instruments for Science and Engineering in the disciplines of Materials Science, Computational Chemistry, Molecular Dynamics, Deep Learning, Neural Networks, Drug Discovery, Biotechnology, and Bioinformatics. Our business model calls for diversification into areas of Science and Engineering made commercially viable by new compute technologies.

[http://creativecllc.com/](http://creativecllc.com/)

---
title: "Crossing the Performance Chasm with OpenPOWER"
date: "2015-02-25"
categories:
- "blogs"
---

### Executive Summary

The increasing use of smart phones, sensors and social media is a reality across many industries today. It is not just where and how business is conducted that is changing; the speed and scope of the business decision-making process is also transforming because of several emerging technologies: Cloud, High Performance Computing (HPC), Analytics, Social and Mobile (CHASM).

High Performance Data Analytics (HPDA) is the fastest growing segment within HPC. Businesses are investing in HPDA to improve customer experience and loyalty, discover new revenue opportunities, detect fraud and breaches, optimize oil and gas exploration and production, improve patient outcomes, mitigate financial risks, and more. Likewise, HPDA helps governments respond faster to emergencies, analyze terrorist threats better and more accurately predict the weather, all of which are vital for national security, public safety and the environment. The economic and social value of HPDA is immense.

But the sheer volume, velocity and variety of data is an obstacle to crossing the Performance Chasm in almost every industry.  To meet this challenge, organizations must deploy a cost-effective, high-performance, reliable and agile IT infrastructure to deliver the best possible business outcomes. This is the goal of IBM's data-centric design of Power Systems and the OpenPOWER Foundation.

A key underlying belief driving the OpenPOWER Foundation is that focusing solely on microprocessors is insufficient to help organizations cross this Performance Chasm. System stack (processors, memory, storage, networking, file systems, systems management, application development environments, accelerators, workload optimization, etc.) innovations are required to improve performance and cost/performance. IBM's data-centric design minimizes data motion, enables compute capabilities across the system stack, provides a modular, scalable architecture and is optimized for HPDA.

Real-world examples of innovations and performance enhancements resulting from IBM's data-centric design of Power Systems and the OpenPOWER Foundation are discussed. These span financial services, life sciences, oil and gas, and other HPDA workloads. These examples highlight the urgent need for clients (and the industry) to evaluate HPC system performance at the solution/workflow level rather than just on narrow synthetic point benchmarks such as LINPACK that have long dominated the industry's discussion.

Clients who invest in IBM Power Systems for HPC could lower their total cost of ownership (TCO) with fewer, more reliable servers compared to x86 alternatives.  More importantly, these customers will also be able to cross the Performance Chasm by leveraging high-value offerings delivered by the OpenPOWER Foundation for many real-life HPC workloads.

### Speaker

_Sponsored by IBM_

**Srini Chari, Ph.D., MBA** [**chari@cabotpartners.com**](mailto:chari@cabotpartners.com)

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Chari-Srini_OPFS2015_IBMCabotPartners_031315_final.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Chari-Srini_OPFS2015_IBMCabotPartners_031315_final.pdf)

[Back to Summit Details](javascript:history.back())

---
title: "Data center and Cloud computing market landscape and challenges"
date: "2015-01-19"
categories:
- "blogs"
---

### Presentation Objective

In this talk, we will gain an understanding of the data center and cloud computing market landscape and its challenges, discuss technology challenges that limit the scaling of cloud computing, which is growing at an exponential pace, and wrap up with insights into how FPGAs combined with general purpose processors are transforming next generation data centers with tremendous compute horsepower, low latency and extreme power efficiency.

### Abstract

Data center workloads demand high computational capabilities, flexibility, power efficiency, and low cost. In the computing hierarchy, general purpose CPUs excel at Von Neumann (serial) processing, GPUs perform well on highly regular SIMD processing, whereas inherently parallel FPGAs excel on specialized workloads. Examples of specialized workloads: compute and network acceleration, video and data analytics, financial trading, storage, database and security.  High level programming languages such as OpenCL have created a common development environment for CPUs, GPUs and FPGAs. This has led to adoption of hybrid architectures and a Heterogeneous World. This talk showcases FPGA-based acceleration examples with CAPI attach through OpenPOWER collaboration and highlights performance, power and latency benefits.

### Speaker Bio

Manoj Roge is Director of Wired & Data Center Solutions Planning at Xilinx. Manoj is responsible for product/roadmap strategy and for driving technology collaborations with partners. He has spent 21 years in the semiconductor industry, with the past 10 years in the FPGA industry. He has been in various engineering and marketing/business development roles with increasing responsibilities. Manoj has been instrumental in driving broad innovative solutions through his participation in multiple standards bodies and consortiums. He holds an MBA from Santa Clara University, an MSEE from the University of Texas, Arlington, and a BSEE from the University of Bombay.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/RogeManoj_OPFS2015_Xilinx_031815.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/RogeManoj_OPFS2015_Xilinx_031815.pdf)

[Back to Summit Details](javascript:history.back())

---
title: "Data Centric Interactive Visualization of Very Large Data"
date: "2015-01-19"
categories:
- "blogs"
---

Speakers: Bruce D'Amora and Gordon Fossum

Organization: IBM T.J. Watson Research, Data Centric Systems Group

### Abstract

The traditional workflow for high-performance computing simulation and analytics is to prepare the input data set, run a simulation, and visualize the results as a post-processing step. This process generally requires multiple computer systems designed for accelerating simulation and visualization. In the medical imaging and seismic domains, the data to be visualized typically comprise uniform three-dimensional arrays that can approach tens of petabytes. Transferring this data from one system to another can be daunting and in some cases may violate privacy, security, and export constraints.  Visually exploring these very large data sets consumes significant system resources and time that can be conserved if the simulation and visualization can reside on the same system to avoid time-consuming data transfer between systems. End-to-end workflow time can be reduced if the simulation and visualization can be performed simultaneously with a fast and efficient transfer of simulation output to visualization input.

Data centric visualization provides a platform architecture where the same high-performance server system can execute simulation, analytics and visualization.  We present a visualization framework for interactively exploring very large data sets using both isoparametric point extraction and direct volume-rendering techniques.  Our design and implementation leverages high performance IBM Power servers enabled with  NVIDIA GPU accelerators and flash-based high bandwidth low-latency memory. GPUs can accelerate generation and compression of two-dimensional images that can be transferred across a network to a range of devices including large display walls, workstation/PC, and smart devices. Users are able to remotely steer visualization, simulation, and analytics applications from a range of end-user devices including common smart devices such as phones and tablets. In this presentation, we discuss and demonstrate an early implementation and additional challenges for future work.

### Speaker Bios

**Bruce D'Amora**, _IBM Research Division, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, New York 10598 (_[_damora@us.ibm.com_](mailto:damora@us.ibm.com)_)_. Mr. D'Amora is a Senior Technical Staff Member in the Computational Sciences department in the Data-centric Computing group.  He is currently focusing on frameworks to enable computational steering and visualization for high performance computing applications.  Previously, Mr. D'Amora was the chief architect of Cell Broadband Engine-based platforms to accelerate applications used for creating digital animation and visual effects. He has been a lead developer on many projects ranging from applications to microprocessors and holds a number of hardware and software patents. He joined IBM Research in 2000 after serving as the Chief Software Architect for the IBM Graphics development group in Austin, Texas, where he led the OpenGL development effort from 1991 to 2000. He holds Bachelor's degrees in Microbiology and Applied Mathematics from the University of Colorado. He also holds a Master's degree in Computer Science from National Technological University.

**Gordon C. Fossum** _IBM Research Division, Thomas J. Watson Research Center, P.O. Box 218, Yorktown Heights, New York 10598 (_[_fossum@us.ibm.com_](mailto:fossum@us.ibm.com)_)._  Mr. Fossum is an Advisory Engineer in Computational Sciences at the Thomas J. Watson Research Center. He received a B.S. degree in Mathematics and Computer Science from the University of Illinois in 1978, an M.S. in Computer Science from the University of California, Berkeley in 1981, and attained "all but dissertation" status from the University of Texas in 1987.  He subsequently joined IBM Austin, where he has worked on computer graphics hardware development, Cell Broadband Engine development, and OpenCL development. He is an author or coauthor of 34 patents, has received a "high value patent" award from IBM and was named an IBM Master Inventor in 2005. In January 2014, he transferred into IBM Research, to help enable visualization of “big data” in a data-centric computing environment.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/DAmora-Bruce_OPFS2015_IBM_031015_final.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/DAmora-Bruce_OPFS2015_IBM_031015_final.pdf)

[Back to Summit Details](javascript:history.back())

---
title: "DB2 BLU w/GPU Demo - Concurrent execution of an analytical workload on a POWER8 server with K40 GPUs"
date: "2015-02-25"
categories:
- "blogs"
---

### Abstract

In this technology preview demonstration, we will show the concurrent execution of an analytical workload on a POWER8 server with K40 GPUs. DB2 will detect both the presence of GPU cards in the server and the opportunity in queries to shift the processing of certain core operations to the GPU.  The required data will be copied into GPU memory, the operation performed, and the results sent back to the POWER8 processor for any remaining processing. The objective is to 1) reduce the elapsed time for the operation and 2) make more CPU available to other SQL processing, increasing overall system throughput by moving CPU-intensive processing tasks to the GPU.

### Speaker names / Titles

Sina Meraji, PhD, Hardware Acceleration Laboratory, SWG [Sinamera@ca.ibm.com](mailto:Sinamera@ca.ibm.com)

Berni Schiefer, Technical Executive (aka DE), Information Management Performance and Benchmarks DB2, BigInsights / Big SQL, BlueMix SQLDB / Analytics Warehouse and Optim Data Studio [schiefer@ca.ibm.com](mailto:schiefer@ca.ibm.com)

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Meraji_OPFS2015_IBM_031715.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Meraji_OPFS2015_IBM_031715.pdf)

[Back to Summit Details](javascript:history.back())

---
title: "Deep Learning Goes to the Dogs"
date: "2016-11-10"
categories:
- "blogs"
tags:
- "featured"
---

_By Indrajit Poddar, Yu Bo Li, Qing Wang, Jun Song Wang, IBM_

These days you can see machine and deep learning applications in so many places. Get driven by a [driverless car](http://www.bloomberg.com/news/features/2016-08-18/uber-s-first-self-driving-fleet-arrives-in-pittsburgh-this-month-is06r7on). Check if your email is really conveying your sense of joy with the [IBM Watson Tone Analyzer](https://tone-analyzer-demo.mybluemix.net/), and [see IBM Watson beat the best Jeopardy player](https://www.youtube.com/watch?v=P0Obm0DBvwI) in the world in speed and accuracy. Facebook is even using image recognition tools to suggest tagging people in your photos; it knows who they are!

## Barking Up the Right Tree with the IBM S822LC for HPC

We wanted to see what it would take to get started building our very own deep learning application and host it in a cloud. We used the open source deep learning framework, [Caffe](http://caffe.berkeleyvision.org/), and example classification Jupyter notebooks from GitHub, like [classifying with ImageNet](http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb). We found several published trained models, e.g. GoogLeNet from the [Caffe model zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo). For a problem, we decided to use dog breed classification. That is, given a picture of a dog, can we automatically identify the breed? This is actually a [class project](http://cs231n.stanford.edu/) from Stanford University with student reports, such as [this one](http://cs231n.stanford.edu/reports/fcdh_FinalReport.pdf) from David Hsu.

We started from the [GoogLeNet model](https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet) and created our own model trained on the [Stanford Dogs Dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) using a system similar to the [IBM S822LC for HPC systems with NVIDIA Tesla P100 GPUs](https://blogs.nvidia.com/blog/2016/09/08/ibm-servers-nvlink/) connected to the CPU with NVIDIA NVLink. As David remarked in his report, without GPUs, it takes a very long time to train a deep learning model on even a small-sized dataset.
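Training in Caffe is driven by a solver definition. A hedged sketch of what a fine-tuning setup like ours could look like is below; the file paths and hyperparameter values are illustrative assumptions, not the values we actually used:

```protobuf
# Illustrative Caffe solver for fine-tuning GoogLeNet on the Stanford Dogs Dataset.
# All paths and hyperparameter values here are assumptions.
net: "models/dogs_googlenet/train_val.prototxt"
test_iter: 100
test_interval: 1000
base_lr: 0.001          # small learning rate: we start from pretrained weights
lr_policy: "step"
gamma: 0.1
stepsize: 20000
momentum: 0.9
weight_decay: 0.0002
max_iter: 100000
snapshot: 10000
snapshot_prefix: "models/dogs_googlenet/snapshots/dogs"
solver_mode: GPU        # the setting that puts the training on the GPUs
```

A run is then launched with `caffe train --solver=solver.prototxt`; `solver_mode: GPU` is what moves the heavy lifting onto the Tesla cards.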

Using a previous generation IBM S822LC OpenPOWER system with a NVIDIA Tesla K80 GPU, we were able to train our model in only a few hours. The [IBM S822LC for HPC systems](http://www-03.ibm.com/systems/power/hardware/s822lc-hpc/) not only features the most powerful NVIDIA Tesla P100 GPUs, but also two IBM POWER8 processors interconnected with powerful [NVIDIA NVLink adapters](https://en.wikipedia.org/wiki/NVLink). These systems make data transfers between main memory and GPUs significantly faster compared to systems with PCIe interconnects.

## Doggy Docker for Deep Learning

We put [our Caffe model and our classification code](https://github.com/Junsong-Wang/pet-breed) written in Python into a web application inside a Docker container and deployed it with Apache Mesos and Marathon. Apache Mesos is an open source cluster management application with fine-grained resource scheduling that now recognizes [GPUs](http://www.nvidia.com/object/apache-mesos.html) as cluster-wide resources.
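
A minimal sketch of what deploying such a container through Marathon's REST API might look like. The app id, image name, port, and resource numbers are assumptions, and the `gpus` field requires a Mesos/Marathon setup built with GPU support:

```python
import json
from urllib import request

def marathon_app(image, app_id="/pet-breed", gpus=1):
    """Build a Marathon app definition requesting Mesos-managed GPUs."""
    return {
        "id": app_id,
        "cpus": 2,
        "mem": 4096,
        "gpus": gpus,  # needs Mesos/Marathon with GPU support enabled
        "instances": 1,
        "container": {
            "type": "DOCKER",
            "docker": {
                "image": image,  # placeholder image name
                "network": "BRIDGE",
                "portMappings": [{"containerPort": 5000, "hostPort": 0}],
            },
        },
    }

def deploy(app, marathon="http://marathon.example.com:8080"):
    """Not executed here: POST the definition to Marathon's /v2/apps."""
    req = request.Request(
        marathon + "/v2/apps",
        data=json.dumps(app).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)
```

Marathon then schedules the container on a Mesos agent that can satisfy the GPU request.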

In addition to Apache Mesos, it is possible to run cluster managers, such as Kubernetes, Spectrum Conductor for Containers, and Docker GPU management components, such as [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) on OpenPOWER systems (see [presentation](http://www.slideshare.net/IndrajitPoddar/enabling-cognitive-workloads-on-the-cloud-gpus-with-mesos-docker-and-marathon-on-power)). In addition to Caffe, it is possible to run other [popular deep learning frameworks and tools](https://openpowerfoundation.org/blogs/deep-learning-options-on-openpower/) such as Torch, Theano, DIGITS and [TensorFlow](https://www.ibm.com/developerworks/community/blogs/fe313521-2e95-46f2-817d-44a4f27eba32/entry/Building_TensorFlow_on_OpenPOWER_Linux_Systems?lang=en) on OpenPOWER systems.

This [lab tutorial](http://www.slideshare.net/IndrajitPoddar/fast-scalable-easy-machine-learning-with-openpower-gpus-and-docker) walks through some simple sample use cases. In addition, some cool examples can be seen from the results of the recently concluded [OpenPOWER Developer Challenge](https://openpowerfoundation.org/blogs/openpower-developer-challenge-finalists/).

## This Dog Will Hunt

Our little GPU-accelerated pet breed classification micro-service is running in a Docker container and can be accessed at this [link](http://129.33.248.88:31001/) from a mobile device or laptop. See for yourself!
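
A rough sketch of querying such a service from Python and timing the round trip. The `/classify` path and `url` query parameter are assumptions for illustration; the service's actual API shape isn't documented here:

```python
import time
from urllib import parse, request

def build_query(service, image_url):
    """Attach the image URL as a query parameter (assumed API shape)."""
    return service + "?" + parse.urlencode({"url": image_url})

def classify_remote(image_url,
                    service="http://129.33.248.88:31001/classify"):
    """Not executed here: fetch a classification and time the round trip.
    The /classify endpoint is an assumption."""
    start = time.time()
    with request.urlopen(build_query(service, image_url),
                         timeout=30) as resp:
        body = resp.read().decode()
    return body, time.time() - start
```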

For example, given this image link from a Google search for "dog images", [https://www.petpremium.com/pets-pics/dog/german-shepherd.jpg](https://www.petpremium.com/pets-pics/dog/german-shepherd.jpg), we got this correct classification in 0.118 secs:

![German Shepherd Deep Learning Dogs](images/dl-dogs-1.png)

You can also spin up your own GPU Docker container with deep learning libraries (e.g. Caffe) in the [NIMBIX cloud](https://platform.jarvice.com/landing) and train your own model and develop your own accelerated classification example.

![dl-dogs-2](images/dl-dogs-2.png)

Give it a try and share your screenshots in the comments section below!

@ -1,39 +0,0 @@
---
title: "Deep Learning Options on OpenPOWER Expand with New Distributions"
date: "2016-09-14"
categories:
- "blogs"
tags:
- "featured"
- "deep-learning"
- "machine-learning"
- "cognitive"
---

_By Michael Gschwind, Chief Engineer, Machine Learning and Deep Learning, IBM Systems_

![open key new 5](images/open-key-new-5.jpg)

I am pleased to announce a major update to the deep learning frameworks available for OpenPOWER as software “distros” (distributions) that are as easily installable as ever using the Ubuntu system installer.

## Significant updates to Key Deep Learning Frameworks on OpenPOWER

Building on the great response to our first release of the Deep Learning Frameworks, we have made significant updates by refreshing all the frameworks available on OpenPOWER as pre-built binaries optimized for GPU acceleration:

- [**Caffe**](http://caffe.berkeleyvision.org/), a dedicated artificial neural network (ANN) training environment developed by the Berkeley Vision and Learning Center at the University of California at Berkeley, is now available in two versions: the leading-edge Caffe development version from UCB's BVLC, and a Caffe version tuned by NVIDIA to offer even more scalability using GPUs.
- [**Torch**](http://torch.ch/), a framework consisting of several ANN modules built on an extensible mathematics library
- [**Theano**](http://deeplearning.net/software/theano/), another framework consisting of several ANN modules built on an extensible mathematics library

The updated Deep Learning software distribution also includes [**DIGITS**](https://developer.nvidia.com/digits), a graphical user interface that makes users immediately productive with the Caffe and Torch deep learning frameworks.

As always, we've ensured that these environments may be built from the source repository for those who prefer to compile their own binaries.

## New Distribution, New Levels of Performance

The new distribution includes major performance enhancements in all key areas:

- **The OpenBLAS** linear algebra library includes enhancements that take full advantage of the [POWER vector-scalar instruction set](https://www.researchgate.net/publication/299472451_Workload_acceleration_with_the_IBM_POWER_vector-scalar_architecture), offering a manifold speedup on POWER CPUs.
- **The Mathematical Acceleration Subsystem (MASS) for Linux** high-performance mathematical libraries are made available in freely distributable form, free of charge, to accelerate cognitive and other Linux applications by exploiting the latest advances in mathematical algorithm optimization and advanced POWER processor features, in particular the [POWER vector-scalar instruction set](https://www.researchgate.net/publication/299472451_Workload_acceleration_with_the_IBM_POWER_vector-scalar_architecture).
- **cuDNN** v5.1 enables Linux on Power cognitive applications to take full advantage of the latest GPU processing features and the newest GPU accelerators.

## [To get started with or upgrade to the latest version of the MLDL frameworks, download the installation instructions](http://ibm.co/1YpWn5h).

@ -1,25 +0,0 @@
---
title: "Department of Energy Awards $425 Million for Next Generation Supercomputing Technologies"
date: "2014-11-20"
categories:
- "press-releases"
- "blogs"
tags:
- "department-of-energy"
- "coral"
- "supercomputer"
---

WASHINGTON — U.S. Secretary of Energy Ernest Moniz today announced two new High Performance Computing (HPC) awards to put the nation on a fast-track to next generation exascale computing, which will help to advance U.S. leadership in scientific research and promote America's economic and national security.

Secretary Moniz announced $325 million to build two state-of-the-art supercomputers at the Department of Energy's Oak Ridge and Lawrence Livermore National Laboratories. The joint Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) was established in early 2014 to leverage supercomputing investments, streamline procurement processes and reduce costs to develop supercomputers that will be five to seven times more powerful when fully deployed than today's fastest systems in the U.S. In addition, Secretary Moniz also announced approximately $100 million to further develop extreme scale supercomputing technologies as part of a research and development program titled FastForward 2.

“High-performance computing is an essential component of the science and technology portfolio required to maintain U.S. competitiveness and ensure our economic and national security,” Secretary Moniz said. “DOE and its National Labs have always been at the forefront of HPC and we expect that critical supercomputing investments like CORAL and FastForward 2 will again lead to transformational advancements in basic science, national defense, environmental and energy research that rely on simulations of complex physical systems and analysis of massive amounts of data.”

Both CORAL awards leverage the IBM Power Architecture, NVIDIA's Volta GPU and Mellanox's interconnect technologies to advance key research initiatives for national nuclear deterrence, technology advancement and scientific discovery. Oak Ridge National Laboratory's (ORNL's) new system, Summit, is expected to provide at least five times the performance of ORNL's current leadership system, Titan. Lawrence Livermore National Laboratory's (LLNL's) new supercomputer, Sierra, is expected to be at least seven times more powerful than LLNL's current machine, Sequoia. Argonne National Laboratory will announce its CORAL award at a later time.

The second announcement today, FastForward 2, seeks to develop critical technologies needed to deliver next-generation capabilities that will enable affordable and energy-efficient advanced extreme scale computing research and development for the next decade.  The joint project between DOE Office of Science and National Nuclear Security Administration (NNSA) will be led by computing industry leaders AMD, Cray, IBM, Intel and NVIDIA.

In an era of increasing global competition in high-performance computing, advancing the Department of Energy's computing capabilities is key to sustaining the innovation edge in science and technology that underpins U.S. national and economic security while driving down the energy and costs of computing. The overall goal of both CORAL and FastForward 2 is to establish the foundation for the development of exascale computing systems that would be 20-40 times faster than today's leading supercomputers.

For more information on CORAL, please click on the following fact sheet [HERE](http://www.energy.gov/downloads/fact-sheet-collaboration-oak-ridge-argonne-and-livermore-coral).

@ -1,89 +0,0 @@
---
title: "Deploying POWER8 Virtual Machines in OVH Public Cloud"
date: "2015-02-24"
categories:
- "blogs"
---

_By Carol B. Hernandez, Sr. Technical Staff Member, Power Systems Design_

Deploying POWER8 virtual machines for your projects is straightforward and fast using OVH POWER8 cloud services. POWER8 virtual machines are available in two flavors in OVH's RunAbove cloud: [http://labs.runabove.com/power8/](http://labs.runabove.com/power8/).

[![image1](images/image1-300x272.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image1.png) ![image2](images/image2-300x300.png)


POWER8 compute is offered in RunAbove as a “Lab”. [Labs](http://labs.runabove.com/index.xml) provide access to the latest technologies in the cloud and are not subject to Service Level Agreements (SLA). I signed up for the POWER8 lab and decided to share my experience and findings.

To get started, you have to open a RunAbove account and sign up for the POWER8 Lab at: [https://cloud.runabove.com/signup/?launch=power8](https://cloud.runabove.com/signup/?launch=power8).

When you open a RunAbove account, you link the account to a form of payment, a credit card or a PayPal account. I had trouble with the credit card path but was able to link a PayPal account successfully.

After successfully signing up for the POWER8 lab, you are taken to the RunAbove home page which defaults to “Add an Instance”.


[![image4](images/image4.jpeg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image4.jpeg)

The process to create a POWER8 instance (aka virtual machine) is straightforward. You select the data center (North America BHS-1), the “instance flavor” (Power 8S), and the instance image (Ubuntu 14.04).

[![image5](images/image5.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image5.png)

Then, you select the ssh key to access the virtual machine. The first time I created an instance, I had to add my ssh key. After that, I just had to select among the available ssh keys.

The last step is to enter the instance name and you are ready to “fire up”. The IBM POWER8 S flavor gives you a POWER8 virtual machine with 8 virtual processors, 4 GB of RAM, and 10 GB of object storage. The virtual machine is connected to the default external network. The Ubuntu 14.04 image is preloaded in the virtual machine.

After a couple of minutes, you get the IP address and can ssh to your POWER8 virtual machine.

[![image6](images/image6.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image6.jpg) [![image13](images/image13.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image13.png)


You can log in to your POWER8 virtual machine and upgrade the Linux image to the latest release available, using the appropriate Linux distribution commands. I was able to successfully upgrade to Ubuntu 14.10.

The default RunAbove interface (Simple Mode) provides access to a limited set of tasks, e.g. add and remove instances, SSH keys, and object storage. The OpenStack Horizon interface, accessed through the drop down menu under the user name, provides access to an extended set of tasks and options.

[![image8](images/image8.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image8.png)

Some of the capabilities available through the OpenStack Horizon interface are:

**Create snapshots.** This function is very helpful to capture custom images that can be used later on to create other virtual machines. I created a snapshot of the POWER8 virtual machine after upgrading the Linux image to Ubuntu 14.10.

[![image9](images/image9.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image9.png)

**Manage project images.** You can add images to your project by creating snapshots of your virtual machines or importing an image using the Create Image task. The figure below shows a couple of snapshots of POWER8 virtual machines after the images were customized by upgrading to Ubuntu 14.10 or adding various packages for development purposes.

[![image10](images/image10.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image10.png)

**Add private network connections.** You can create a local network and connect your virtual machines to it when you create an instance.

[![image11](images/image11.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image11.png)

**Create instance from snapshot.** The launch instance task, provided in the OpenStack Horizon interface, allows you to create a virtual machine using a snapshot from the project image library. In this example, the snapshot of a virtual machine that was upgraded to Ubuntu 14.10 was selected.

[![image12](images/image12.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image12.png)

[![image7](images/image7.jpeg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image7.jpeg)

**Customize instance configuration.** The launch instance task also allows you to add the virtual machine to a private network and specify post-deployment customization scripts, e.g. OpenStack user-data.

[![image14](images/image14.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image14.jpg)

All of these capabilities are also available through OpenStack APIs. The figure below lists all the supported OpenStack services.

[![image15](images/image15.png)](https://openpowerfoundation.org/wp-content/uploads/2015/02/image15.png)

Billing is based on created instances. The hourly rate ($0.05/hr) is charged even if the instance is inactive or you never log in to the instance. There is also a small charge for storing custom images or snapshots.

To summarize, you can quickly provision a POWER8 environment to meet your project needs using OVH RunAbove interfaces as follows:

- Use “Add Instance” to create a POWER8 virtual machine. Customize the Linux image with the desired development environment / packages or workloads
- Upgrade to desired OS level
- Install any applications, packages or files needed to support your project
- Create a snapshot of the POWER8 virtual machine with custom image
- Use “Launch Instance” to create a POWER8 virtual machine using the snapshot of your custom image
    - For quick and consistent deployment of the desired environment on multiple POWER8 virtual machines
- Delete and re-deploy POWER8 virtual machines as needed to meet your project demands
- Use OpenStack APIs to automate deployment of POWER8 Virtual Machines
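
As a sketch of that last automation step using the openstacksdk library: the cloud name, flavor id, and key name below are placeholders, and `deploy_from_snapshot` is illustrative rather than tested against RunAbove:

```python
def server_spec(name, image_id, flavor_id, key_name, network_id=None):
    """Pure helper: assemble the arguments for a server-create call."""
    spec = {"name": name, "image_id": image_id,
            "flavor_id": flavor_id, "key_name": key_name}
    if network_id:
        spec["networks"] = [{"uuid": network_id}]
    return spec

def deploy_from_snapshot(cloud, snapshot_id, names,
                         flavor_id="power8s-flavor-id",  # placeholder
                         key_name="my-key"):             # placeholder
    """Not executed here: create one VM per name from a custom snapshot."""
    import openstack  # pip install openstacksdk
    conn = openstack.connect(cloud=cloud)
    return [conn.compute.create_server(
                **server_spec(n, snapshot_id, flavor_id, key_name))
            for n in names]
```

The same pattern scales to deleting and re-creating instances on demand.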

For more information about the OVH POWER8 cloud services and to sign up for the POWER8 lab visit: [http://labs.runabove.com/power8/](http://labs.runabove.com/power8/).

@ -1,37 +0,0 @@
---
title: "OpenPOWER Foundation Accelerates Developer Adoption at OpenPOWER Summit Europe"
date: "2018-10-03"
categories:
- "blogs"
tags:
- "featured"
---

![](images/Summit-Europe-Banner-e1538569223974.jpg)

More than 250 industry leaders and OpenPOWER Foundation members registered and are convening today at the OpenPOWER Summit Europe 2018 in Amsterdam. The two-day, developer-centric event, themed “Open the Future,” includes sessions on technologies like PCIe Gen4, CAPI, OpenCAPI, Linux, FPGA, Power Architecture and more.

Front and center at OpenPOWER Summit Europe is the Talos II developer workstation by Raptor Computing Systems. As the first POWER9 developer workstation, the Talos II will enable more developers to begin working on Power technology thanks to its affordable price point.

Artem Ikoev, co-founder and CTO of Yadro, one of the OpenPOWER Foundation's newest Platinum members, will also speak at OpenPOWER Summit Europe. According to Ikoev, “The openness of the OpenPOWER Foundation enables collaboration among industry leaders as well as emerging vendors, resulting in pioneering products.”

“European interest in OpenPOWER has grown consistently and now comprises close to 25 percent of our membership,” said Hugh Blemings, executive director, OpenPOWER Foundation. “Computing infrastructure, artificial intelligence, security and analytics are all areas where our European members are bringing innovative solutions to the forefront.”

## OpenPOWER Summit Europe Hackathons

OpenPOWER Summit Europe attendees will have a chance to participate in two hands-on hackathons.

The OpenBMC hackathon will provide participants with a complete understanding of the fundamentals of OpenBMC including development, build environment and service management. Planned exercises will cover kernel updates, initial application development, web user interface customization and support system integration.

The AI4Good hackathon empowers participants to use their coding skills to help others. Teams will compete to build predictive machine learning and deep learning models to help detect the risk of lung tumors.

## OpenPOWER Growth in Europe

Representatives from a number of OpenPOWER Foundation member organizations attended Summit Europe 2018 to share how they're using Power to Open the Future. Highlights include:

- To assist with the monumental task of collecting data generated by the Large Hadron Collider (LHC), **CERN** is evaluating POWER9-based OpenPOWER systems to capture the 5 terabytes of data generated each second by the LHC. POWER9's industry-leading I/O features can help drive differentiated performance for the FPGA cards that CERN uses to capture the data.
- Based on blockchain technology and decentralized networks with democratic oversight, **Vereign** adds integrity, authenticity and privacy to identity, data and collaboration. “Such federated networks of user-controlled clouds require performance, transparency and the ability to add strong hardware-based cryptography. OpenPOWER is the only platform that gives us all three in combination with a vibrant ecosystem of further innovation to further improve our solution,” said Georg Greve, co-founder and president, Vereign AG.
- **Brytlyt** works with companies to solve the challenge of analyzing billions of rows of data at “the speed of thought” by indexing, joining and aggregating data with its GPU database.
- Leveraging OpenPOWER enabled **E4** to build and integrate a chain of components that enable its [D.A.V.I.D.E. supercomputer](https://www.e4company.com/en/?id=press&section=1&page=&new=davide_supercomputer) to achieve increased energy efficiency.
- **Inspur Power Systems** strives to build a new generation of OpenPOWER server products for data centers facing the “cloud intelligence” era. The company has released three OpenPOWER servers this year, including its Enterprise General Platform, Commercial Computing Platform, and HPC and AI Platform.
- **Delft University** is working to create next generation OpenPOWER computing systems to achieve the best performance for the target application. In collaboration with IBM, the organization is working to accelerate DNA analysis on FPGAs using CAPI with a goal of creating an end-to-end DNA analysis solution that is easily scalable and delivers high speed.

As organizations collaborate on new solutions and more developers begin to build on Power, the OpenPOWER Foundation expects continued growth in Europe and around the world. For real time updates from the event, check out [#OpenPOWERSummit](https://twitter.com/hashtag/OpenPOWERSummit?src=hash) on Twitter.

@ -1,30 +0,0 @@
---
title: "New OpenPOWER Member DRC Computing Discusses FPGAs at IBM Interconnect"
date: "2016-02-22"
categories:
- "blogs"
tags:
- "featured"
---

_By Roy Graham, President and COO, DRC Computer Corp._

New business models bring new opportunities, and my relationship with IBM is proof-positive of that fact. Although I respected them, in the previous way of doing business they were the competition, and it was us or them. Wow, has that changed! In the last year working with IBM I see a very new company and the OpenPOWER organization as a real embodiment of a company wanting to partner and foster complementary technologies.

![DRC](images/DRC.png)

DRC Computer (DRC) builds highly accelerated, low latency applications using FPGAs (Field Programmable Gate Arrays). These chips offer massive parallelism at very low power consumption. By building applications that exploit this parallelism we can achieve acceleration factors of 30 to 100+ times the equivalent software version. We have built many diverse applications in biometrics, DNA familial search, data security, petascale indexing, and others. At InterConnect 2016 I'll be highlighting two applications: massive graph network analytics and fuzzy-logic-based text/data analysis. More details on some of the DRC applications can be found [here](http://drccomputer.com/solutions.html).

https://www.youtube.com/watch?v=DZZuur8LXOY

We are working closely with the CAPI group at IBM to integrate the DRC FPGA-based solutions into Power systems. One of the early results of this cooperation was a demonstration of the DRC graph network analytics at SC15 running on a [POWER8 system using a Xilinx FPGA](https://openpowerfoundation.org/blogs/accelerating-key-value-stores-kvs-with-fpgas-and-openpower/).

OpenPOWER provides DRC with a large and rapidly expanding ecosystem that can help us build better solutions faster and offer partnerships that will vastly expand our market reach. The benefit for our customers will be a more fully integrated solution and improved application economics. In **[Session 6395 on Feb 23rd at 4:00pm PT](http://ibm.co/1QcEiUz)** I will be presenting this work with FPGAs at [IBM's InterConnect Conference](http://ibm.co/1KsWIzQ) in Las Vegas as part of a four-person panel discussing OpenPOWER.

In the session, I'll cover the DRC graph network analytics and fuzzy-logic-based text/data analysis. The graph networking system implements Dijkstra and Betweenness Centrality algorithms to discover and rank relationships between millions of people, places, events, objects, and more. This achieves in excess of 100x acceleration compared to a software-only version. As a least-cost path and centrality analysis, it has broad applicability in many areas including social network analysis, distribution route planning, aircraft design, epidemiology, stock trading, etc. The fuzzy-logic-based text/data analytics was designed for social media analysis, and captures common social media misspellings, shorthand, and mixed language usage. The DRC product is tolerant of these and enables an analyst to do a score-based approximate match on phrases or words they are searching for. We can search on hundreds of strings simultaneously on one FPGA, achieving acceleration factors of 100x over software implementations.

OpenPOWER is opening up whole new uses for FPGAs, and through the collaborative ecosystem, the greatest minds in the industry are working on unlocking the power of accelerators. In an era where the performance of systems comes not just from the chip but from across the entire system stack, OpenPOWER's new business model is the key to driving innovation and transforming businesses. Please join me at **[session 6395 on Feb 23rd at 4:00pm PT](http://ibm.co/1QcEiUz)**, and I look forward to collaborating with you and our fellow members in the OpenPOWER ecosystem.

* * *

_Roy Graham is the President and COO of DRC Computer Corp. and builds profitable revenue streams for emerging technologies including data analytics, communications, servers, human identification systems and hybrid applications. At Digital and Tandem, Roy ran product management groups delivering more than $10B in new revenue. He was later SVP of sales and marketing at Wyse ($250M turnaround) and at Be (IPO), and CEO at two early-stage web-based companies._

@ -1,32 +0,0 @@
---
title: "E4 Computer Engineering Showcases Full Line of OpenPOWER Hardware at International Supercomputing"
date: "2016-06-20"
categories:
- "blogs"
tags:
- "featured"
---

_By Ludovica Delpiano, E4 Computing_

E4's mission, to drive innovation by implementing and integrating cutting-edge solutions with the best performance for every high-end computing and storage requirement, is very much our focus for this year's edition of ISC. We chose to [showcase](http://cms-it.e4company.com/media/35466/e4pr-accelerated-openpower-system-by-e4-computer-engineering-showcased-a.pdf) a number of systems at our booth, #914, based on one of the most advanced technologies available at the moment: accelerated POWER8 technology.

## Showcasing OpenPOWER Servers

![E4 Computer Engineering at ISC](images/20160620_154652-1-169x300.jpg)

_E4 Computer Engineering at ISC_

E4's solutions at ISC16 represent a solid alternative to standard x86 technology, providing scientific and industrial researchers with fast performance for their complex processing applications.

Our newest system, the OP205, is our most advanced POWER8-based server designed for high performance computing and big data. It includes Coherent Accelerator Processor Interface (CAPI)-enabled PCIe slots and can host two NVIDIA K80 GPUs. Both technologies are designed to accelerate application performance with the POWER8 CPU.

## Building Faster Servers with NVLink

In addition, the OP Series is powered by the [NVIDIA Tesla Accelerated Computing Platform](http://www.nvidia.com/object/why-choose-tesla.html) and two out of the three solutions on display at our booth utilize the new [NVIDIA Tesla P100 GPU accelerators](http://www.nvidia.com/object/tesla-p100.html) with the high-bandwidth NVIDIA NVLink™ interconnect technology, which dramatically speeds up throughput and maximizes application performance.

We are confident that the series can be a perfect match for complex workloads in Oil & Gas, Finance, Big Data and all compute-intensive applications.

We look forward to meeting anyone attending the conference who is interested in getting familiar with OpenPOWER. Just pop by booth #914 and our team will talk you through the various options.

We see ISC as a perfect venue to launch this technology, with the opportunity to talk to the people who may benefit from it and to find out from them the applications and codes that are most needed.

## To learn more, visit us at [www.e4company.com](http://www.e4company.com).

@ -1,30 +0,0 @@
---
title: "Early Application Experiences on Summit at Oak Ridge National Laboratory"
date: "2018-12-18"
categories:
- "blogs"
tags:
- "featured"
---

By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM

We recently held the [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/) at the Nimbix headquarters in Dallas, Texas. Having taken place just before SC18, this event allowed members of the Academia Discussion Group and other developers using OpenPOWER platforms to exchange results and enhance their technical knowledge and skills.

One of the most interesting sessions was led by [Dr. Wayne Joubert](https://www.olcf.ornl.gov/directory/staff-member/wayne-joubert/), computational scientist in the Scientific Computing Group at the National Center for Computational Sciences at Oak Ridge National Laboratory (ORNL). Dr. Joubert shared insight into early application experiences on [Summit](https://www.olcf.ornl.gov/summit/), [the most powerful supercomputer in the world](https://www.top500.org/news/us-regains-top500-crown-with-summit-supercomputer-sierra-grabs-number-three-spot/).

A number of teams have already started working on Summit in a variety of fields for various applications:

- **Center for Accelerated Application Readiness (CAAR):** [This group at ORNL](https://www.olcf.ornl.gov/caar/) is responsible for bringing applications forward to get them ready for next-generation systems. So far, 13 CAAR teams have been involved from domains including astrophysics, chemistry, engineering and more. These were the first teams to get access to the first 1,080 Summit nodes (at present, 4,608 nodes are available).
- **Summit Early Science Program:** ORNL received 65 letters of intent and 47 full proposals for its [Summit Early Science Program](https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/summit-early-science-program-call-for-proposals/). Notably, about 20 percent of these included a machine learning component, a remarkable increase in interest in deep learning applications.
- **ACM Gordon Bell Prize:** The Gordon Bell Prize is awarded each year to recognize outstanding achievement in high-performance computing. Five finalist teams used Summit this year, including [both winning teams](https://www.hpcwire.com/off-the-wire/acm-awards-2018-gordon-bell-prize-to-two-teams-for-work-combating-opioid-addiction-understanding-climate-change/): “Attacking the Opioid Epidemic: Determining the Epistatic and Pleiotropic Genetic Architectures for Chronic Pain and Opioid Addiction” and “Exascale Deep Learning for Climate Analytics.”

Overall, Dr. Joubert shared that, “Summit is a very, very powerful system. Users are already using it effectively and we're really excited about it.”

View Dr. Joubert's full session video and slides below.

<iframe src="https://www.youtube.com/embed/NlMN9G04BxM" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

<iframe style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 5px; max-width: 100%;" src="//www.slideshare.net/slideshow/embed_code/key/5lUFuM6qfGs01B" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe>

**[Early Application experiences on Summit](//www.slideshare.net/ganesannarayanasamy/early-application-experiences-on-summit "Early Application experiences on Summit ")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)**

@ -1,28 +0,0 @@
---
title: "eASIC Brings Advanced FPGA Technology to OpenPOWER"
date: "2016-05-19"
categories:
- "blogs"
tags:
- "featured"
---

_By Anil Godbole, Senior Marketing Manager, eASIC Corp._

![easic logo](images/easic-logo.png) [eASIC](http://www.easic.com) is very excited to join the OpenPOWER Foundation. One of the biggest value propositions of the [eASIC Platform](http://www.easic.com/products/) is to offer an FPGA design flow combined with ASIC-like performance and up to 80% lower power consumption. This allows the community to enable custom-designed co-processor and accelerator solutions in datacenter applications such as searching, pattern-matching, signal and image processing, data analytics, video/image recognition, etc.

## **Need for Power-efficient CPU Accelerators**

The advent of multi-core CPUs/GPUs has helped to increase the performance of modern datacenters. However, this performance is being limited by a non-proportional increase in energy consumption. As workloads like Big Data analytics and Deep Neural Networks continue to evolve in size, there is a need for a new computing paradigm that will continue scaling compute performance while keeping power consumption low.

A key technique is to exploit parallelism during program execution. While multi-core processors can also execute in parallel, they burn a lot of energy when sharing data/messages between processors. That is because such data typically resides in off-chip RAMs and their accesses are very power hungry.

## **eASIC Platform**

The eASIC Platform uses distributed logic blocks with associated local memories which enable highly parallel and power efficient implementations of the most complex algorithms. With up to twice the performance of FPGAs and up to 80% lower power consumption, the eASIC Platform can provide highly efficient performance per watt for the most demanding algorithms. The vast amount of storage provided by the local memories allows fast message and data transfers between the compute elements, reducing latency without incurring the power penalty of accessing off-chip RAM.

## **CAPI Enhancements**

CAPI defines a communication protocol for command/data transfers between the main processor and the accelerator device based on shared, coherent memory. Compared to traditional I/O-based protocols, CAPI's approach precludes the need for O/S calls, thereby significantly reducing the latency of program execution.

Combining the benefits of the eASIC Platform and the CAPI protocol can lead to high-performance and power-efficient co-processor/accelerator solutions. For more details on the eASIC Platform, please feel free to contact us at [www.easic.com](http://www.easic.com) or follow us on Twitter [@eASIC](https://twitter.com/easic).

@ -1,9 +0,0 @@
---
title: "eASIC Joins the OpenPOWER Foundation to Offer Custom-designed Accelerator Chips"
date: "2016-05-04"
categories:
- "press-releases"
- "blogs"
---


@ -1,28 +0,0 @@
---
title: "Expanding Ecuador's Supercomputing Future with Yachay and OpenPOWER"
date: "2016-11-22"
categories:
- "blogs"
---

_By Alejandra Gando, Director of Communications, Yachay EP_

![ibm_yachay1](images/IBM_Yachay1.jpg)

The pursuit of supercomputing represents a major step forward for Ecuador, and Yachay EP, together with IBM and OpenPOWER, is leading the way.

[Yachay](http://www.yachay.gob.ec/yachay-ep-e-ibm-consolidan-acciones-de-alto-desarrollo-tecnologico-para-el-pais/?cm_mc_uid=18522278184214774002079&cm_mc_sid_50200000=1479849333), a planned city for technological innovation and knowledge intensive businesses combining the best ideas, human talent and state-of-the-art infrastructure, is tasked with creating the worldwide scientific applications necessary to achieve Good Living (Buen Vivir). In its constant quest to push Ecuador towards a knowledge-based economy, Yachay found a partner in OpenPOWER member IBM to create a source of information and research on issues such as oil, climate and food genomics.

Yachay will benefit from state-of-the-art technology, IBM's new OpenPOWER LC servers infused with innovations developed by the OpenPOWER community, in its search to improve the production of non-traditional exports based on the rich biodiversity of Ecuador. It will be able to use genomic research to improve the quality of products and become more competitive in the global market. Genomic research revolutionizes both the food industry and medicine. Until now, progress in the local genomics field had been slowed by the sheer volume of data involved, which created an obstacle to investigation.

"For IBM it is of great importance to provide an innovative solution for the country, the region and the world, in order to provide research and allow Ecuador to pioneer in areas such as genomics, environment and new sources of economic growth" says Patricio Espinosa, General Manager, IBM Ecuador.

Built on IBM POWER8 servers and storage solutions, with software for advanced analytics and cognitive computing, the system acquired by Yachay EP enables real-time HPC applications with large data volumes, expanding scientists' capacity to make quantitative predictions. IBM systems use a data-centric approach, integrating and linking data to predictive simulation techniques that expand the limits of scientific knowledge.

The new supercomputing project will allow Yachay to foster projects with a higher technology component, to create simulations and to do projects with the capacity of impacting the way science is done in the country.

Héctor Rodríguez, General Manager of the Public Company Yachay, noted with pride the consolidation of an increasingly strong ecosystem for innovation, entrepreneurship and technological development in Ecuador.

Once the supercomputer is in place, researchers at Yachay will be able to work on projects that require supercomputing, enabling better and faster results. By applying high performance computing to these analyses, different organizations and companies can maximize their operations and minimize the latency of their systems, allowing them to obtain further findings in their research.

Want to learn more? Visit [www.ciudadyachay.com](http://www.ciudadyachay.com) (available in English and Spanish) and follow us on Twitter at @CiudadYachay.

@ -1,24 +0,0 @@
---
title: "Get Ready for OpenPOWER: A Technical Training Session with E&ICT Academy in India"
date: "2019-02-15"
categories:
- "blogs"
tags:
- "featured"
---

By Ganesan Narayanasamy

![OpenPOWER and Data Analytics](images/EICT-1024x575.png)

Professor [R.B.V Subramaanyam, Ph.D.,](https://www.nitw.ac.in/faculty/id/16341/) a computer science professor at the National Institute of Technology, Warangal, India, recently organized a six-day faculty development program as part of the [Electronics & ICT Academy](http://eict.iitg.ac.in/). More than 40 faculty members and researchers from Southern India participated in the workshop.

One full day of the program was dedicated to learning about OpenPOWER. I was happy to take the opportunity to deliver technical sessions on Spark, Spark ML and Internals along with my colleague and IBM Technical lead [Josiah Samuel](https://www.linkedin.com/in/josiahsams/?originalSubdomain=in).

Josiah covered a Spark overview, Spark SQL, Spark Internals and Spark ML. He conveyed IBM's involvement in these open source technologies, and discussed Power Systems' capabilities in artificial intelligence and high-performance computing. One key differentiator discussed was the incorporation of NVIDIA GPUs into Power servers along with NVLink connections.

We shared materials and code with the faculty and researchers after the interactive session, so they can continue to develop their knowledge and skills. Rich technology training sessions like this one offer faculty the opportunity to learn more about the OpenPOWER stack!

<iframe style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 5px; max-width: 100%;" src="//www.slideshare.net/slideshow/embed_code/key/4vHc7s504hiuzx" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe>

**[Power Software Development with Apache Spark](//www.slideshare.net/OpenPOWERorg/power-software-development-with-apache-spark "Power Software Development with Apache Spark")** from **[OpenPOWERorg](https://www.slideshare.net/OpenPOWERorg)**

@ -1,30 +0,0 @@
---
title: "Enabling Coherent FPGA Acceleration"
date: "2015-01-16"
categories:
- "blogs"
---

**Speaker:** [Allan Cantle](https://www.linkedin.com/profile/view?id=1004910&authType=NAME_SEARCH&authToken=ckHg&locale=en_US&srchid=32272301421438603123&srchindex=1&srchtotal=1&trk=vsrp_people_res_name&trkInfo=VSRPsearchId%3A32272301421438603123%2CVSRPtargetId%3A1004910%2CVSRPcmpt%3Aprimary) President & Founder, Nallatech **Speaker Organization:** ISI / Nallatech

### Presentation Objective

To introduce the audience to IBM's Coherent Accelerator Processor Interface (CAPI) and the CAPI Hardware Development Kit (HDK) provided by Nallatech, and to give an overview of FPGA acceleration.

### Abstract

Heterogeneous Computing and the use of accelerators are becoming a generally accepted method of delivering efficient application acceleration. However, to date, there has been a lack of coordinated efforts to establish open industry standard methods for attaching and communicating between host processors and the various accelerators that are available today. With IBM's OpenPOWER Foundation initiative, we now have the opportunity to effectively address this issue and dramatically improve the use and adoption of Accelerators.

The presentation will introduce CAPI, the Coherent Accelerator Processor Interface, to the audience and will detail the CAPI HDK (Hardware Development Kit) implementation that is offered to OpenPOWER customers through Nallatech. Several high level examples will be presented that show where FPGA acceleration brings significant performance gains and how these can often be further advantaged by the coherent CAPI interface. Programming methodologies of the accelerator will also be explored, where customers can either leverage pre-compiled accelerated libraries that run on the accelerator or write their own accelerated functions in OpenCL.

### Speaker Bio

Allan is the founder of Nallatech, established in 1993, which specializes in compute acceleration using FPGAs. As CEO, Allan focused Nallatech on helping customers port critical codes to Nallatech's range of FPGA accelerators and pioneered several early tools that increased porting productivity. His prior background at BAE Systems involved architecting real-time, heterogeneous computers that tested live weapon systems and contained many parallel processors, including microprocessors, DSPs and FPGAs. Allan holds a 1st Class Honors EE BEng degree from Plymouth University and an MSc in Corporate Leadership from Napier University.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Cantle_OPFS2015_Nallatech_031315_final.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Cantle_OPFS2015_Nallatech_031315_final.pdf)

[Back to Summit Details](javascript:history.back())

@ -1,34 +0,0 @@
---
title: "Enabling financial service firms to compute heterogeneously with Gateware Defined Networking (GDN) to build order books and trade with the lowest latency."
date: "2015-01-16"
categories:
- "blogs"
---

### Abstract and Objectives

Stock, futures, and option exchanges; market makers; hedge funds; and traders require real-time knowledge of the best bid and ask prices for the instruments that they trade. By monitoring live market data feeds and computing an order book with Field Programmable Gate Array (FPGA) logic, these firms can track the balance of pending orders for equities, futures, and options with sub-microsecond latency. Tracking the open orders by all participants ensures that the market is fair, liquidity is made available, trades are profitable, and jitter is avoided during bursts of market activity.

Algo-Logic has developed multiple Gateware Defined Networking (GDN) algorithms and components to support ultra-low-latency processing functions in heterogeneous computing systems. In this work, we demonstrate an ultra-low-latency order book that runs in FPGA logic in an IBM POWER8 server, which includes an ultra-low-latency 10 Gigabit/second Ethernet MAC, a market data feed handler, a fast key/value store for tracking level 3 orders, logic to sort orders, and a standard PSL interface which transfers level 2 market snapshots for multiple trading instruments into shared memory. Algo-Logic implemented all of these algorithms and components in logic on an Altera Stratix V A7 FPGA on a Nallatech CORSA card. Sorted L2 books are transferred over the IBM CAPI bus into cache lines of system memory. By implementing the entire feed processing module and order book in logic, the system enables software on the POWER8 server to directly receive market data snapshots with the least possible theoretical latency and jitter.

As a member of the OpenPOWER Foundation (OPF), Algo-Logic provides an open Application Programming Interface (API) that allows traders to select which instruments they wish to track and how often they want snapshots to be transferred to memory. These commands, in turn, are transferred across the IBM-provided Power Service Layer (PSL) to the algorithms that run in logic on the FPGA. Thereafter, trading algorithms running in software on any of the 96 hyper-threads in a two-socket POWER8 server can readily access the market data directly from shared memory. When combined with a Graphics Processing Unit, a dual-socket POWER8 system optimally leverages the fastest computation from up to 96 CPU threads, high-throughput vector processing from hundreds of GPU cores, and the ultra-low latency from thousands of fine-grain state machines in FPGA logic to implement a truly heterogeneous solution that achieves better performance than could be achieved with homogeneous computation running only in software.
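
The level-3-to-level-2 aggregation described above can be sketched in software (an illustrative Python model only; `build_l2_book` is a hypothetical helper, not Algo-Logic's API, and the actual system performs this step in FPGA logic with sub-microsecond latency):

```python
from collections import defaultdict

def build_l2_book(l3_orders):
    """Aggregate individual (level-3) orders into a level-2 book:
    total resting size at each price level, per side."""
    depth = {"bid": defaultdict(int), "ask": defaultdict(int)}
    for side, price, size in l3_orders:
        depth[side][price] += size
    # Best bid = highest price first; best ask = lowest price first.
    bids = sorted(depth["bid"].items(), reverse=True)
    asks = sorted(depth["ask"].items())
    return bids, asks

orders = [
    ("bid", 100.10, 5), ("bid", 100.00, 7), ("bid", 100.10, 3),
    ("ask", 100.20, 4), ("ask", 100.30, 6),
]
bids, asks = build_l2_book(orders)  # bids[0] is the best bid level
```

A real feed handler would also process cancels and modifies and maintain the book incrementally; this sketch only shows the aggregation that produces the sorted L2 snapshot which, in the system described, is transferred over CAPI into shared memory.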

### Presenter Bio

John W. Lockwood, CEO of Algo-Logic Systems, Inc., is an expert in building FPGA-accelerated applications. He has founded three companies focused on low latency networking, Internet security, and electronic commerce and has worked at the National Center for Supercomputing Applications (NCSA), AT&T Bell Laboratories, IBM, and Science Applications International Corp (SAIC). As a professor at Stanford University, he managed the NetFPGA program from 2007 to 2009 and grew the Beta program from 10 to 1,021 cards deployed worldwide. As a tenured professor, he created and led the Reconfigurable Network Group within the Applied Research Laboratory at Washington University in St. Louis. He has published over 100 papers and patents on topics related to networking with FPGAs and served as principal investigator on dozens of federal and corporate grants. He holds BS, MS, and PhD degrees in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign and is a member of IEEE, ACM, and Tau Beta Pi.

### About Algo-Logic Systems

Algo-Logic Systems is a recognized leader of Gateware Defined Networking® (GDN) solutions built with Field Programmable Gate Array (FPGA) logic. Algo-Logic uses gateware to accelerate datacenter services, lower latency in financial trading networks, and provide deterministic latency for real-time Internet devices. The company has extensive experience building datacenter switches, trading systems, and real-time data processing systems in reprogrammable logic.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Lockwood_John-Algo-Logic_OPFS2015_031715_v4.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Lockwood_John-Algo-Logic_OPFS2015_031715_v4.pdf)

[Back to Summit Details](javascript:history.back())

@ -1,81 +0,0 @@
---
title: "European Market Adopting OpenPOWER Technology at Accelerated Pace"
date: "2016-10-27"
categories:
- "press-releases"
- "blogs"
tags:
- "featured"
---

_Widespread Adoption of OpenPOWER Technology Across Europe for Artificial Intelligence, Deep Learning and World-Advancing Research including the Human Brain Project_

_Developer Momentum Continues with European OpenPOWER Developer Cloud, CAPI SNAP Framework Tool, OpenPOWER READY Accelerator Boards and Winners of Developer Challenge Revealed_

_European OpenPOWER Community Grows to 60 Members Strong_

Barcelona, Spain, October 27, 2016: At the inaugural OpenPOWER European Summit, the OpenPOWER Foundation made a series of announcements today detailing the rapid growth, adoption and support of OpenPOWER across the continent. Members announced:

- a series of European-based OpenPOWER technology implementations advancing corporate innovation and driving important world research including the Human Brain Project;
- a new set of developer resources, including an OpenPOWER developer cloud for European organizations and students; and
- new OpenPOWER-based solutions designed to improve performance for modern, new workloads including artificial intelligence, deep learning, accelerated analytics and high performance computing.

The [OpenPOWER Foundation](http://www.openpowerfoundation.org/) is a global technology development community with more than 270 members worldwide supporting new product design, development and implementation on top of the high performing, open POWER processor. Many of the OpenPOWER-based technologies developed by OpenPOWER members in Europe are being used to help meet the unique needs of corporations running some of the largest data centers in the world and by researchers exploring high performance computing solutions to help solve some of the world's greatest challenges.

“Data growth in virtually every industry is forcing companies and organizations to change the way they consume, innovate around and manage IT infrastructure,” said Calista Redmond, President of the OpenPOWER Foundation. “Commodity platforms are proving ineffective when it comes to ingesting and making sense of the 2.5 billion GBs of data being created daily. With today's announcements by our European members, the OpenPOWER Foundation expands its reach, bringing open source, high performing, flexible and scalable solutions to organizations worldwide.”

**New OpenPOWER Deployments and Offerings in Europe** At the Summit, European technology leaders announced important deployments, offerings and research collaborations involving OpenPOWER-based technology. They include:

- **FRANCE** GENCI (Grand Equipement National pour le Calcul Intensif), France's large national research facility for high performance computing, has launched a collaborative technology-watch initiative to prepare French scientific communities for the challenges of exascale, and to anticipate novel architectures for future procurements of Tier1 and Tier0 systems in France. OpenPOWER technology has been identified as one of the leading architectures within this initiative.
- **GERMANY** In support of the Human Brain Project, a research project funded by the European Commission to advance understanding of the human brain, OpenPOWER members IBM, NVIDIA and the Juelich Supercomputing Centre [delivered a pilot system](https://openpowerfoundation.org/blogs/advancing-human-brain-project-openpower/) as part of the Pre-Commercial Procurement process. Called JURON, the new supercomputer leverages IBM's new Power S822LC for High Performance Computing system, which features unique CPU-to-GPU NVIDIA NVLink technology. As part of the system installation, the OpenPOWER members delivered to the Human Brain Project a set of key and unique research assets, such as Direct Storage Class Memory Access and flexible Platform LSF extensions that allow dynamic job resizing, as well as a port of workhorse neuroscience codes to the new OpenPOWER-based architecture.
- **SPAIN**  The Barcelona Supercomputing Center (BSC) [announced](https://www.bsc.es/about-bsc/press/bsc-in-the-media/bsc-joins-openpower-foundation) it is using OpenPOWER technology for work underway at the IBM-BSC Deep Learning Center.  At the joint center, IBM and BSC scientists are developing new algorithms to improve and expand the cognitive capabilities of deep learning systems.
- **TURKEY** SC3 Electronics, a leading cloud supercomputing center in Turkey, announced the company is creating the largest HPC cluster in the Middle East and North Africa region based on one of IBM's new OpenPOWER LC servers, the Power S822LC for High Performance Computing, which takes advantage of NVIDIA NVLink technology and the latest NVIDIA GPUs. According to SC3 Executive Vice President Emre Bilgi, this is an important milestone for Turkey's journey into HPC leadership. Once installed, the cluster will be deployed internally and will also support new cloud services planned to be available by the end of the year.

These deployments come as OpenPOWER innovations around accelerated computing, storage, and networking via the high-speed interfaces of NVIDIA NVLink and the newly formed open standard OpenCAPI, gain adoption in the datacenter.

**Developer Momentum** To further support a growing demand for OpenPOWER developer resources in Europe and worldwide, OpenPOWER members announced:

- **New European developer cloud** In a significant expansion of developer resources, members of the OpenPOWER Foundation, in collaboration with the [Technical University of Munich](http://www.tum.de/) at the [Department of Informatics](http://www.in.tum.de/), announced plans to launch the European arm of the development and research cloud called Supervessel. First launched in China, Supervessel is a cloud platform built on top of POWER's open architecture and technologies. It aims to provide open remote access for all ecosystem developers and university students. With the importance of data sovereignty in Europe, this installment of Supervessel will enable students and developers to innovate applications on the OpenPOWER platform locally, creating new technology while following local data regulations. Supervessel Europe is expected to launch before the end of 2016.
- **CAPI SNAP Framework** Developed by European and North American based OpenPOWER members IBM, Xilinx, Reconfigure.io, Eideticom, Rackspace, Alpha Data and Nallatech, the [CAPI SNAP Framework](https://openpowerfoundation.org/blogs/openpower-makes-fpga-acceleration-snap/) is available in beta to developers worldwide.  It is designed to make FPGA acceleration technology from the OpenPOWER Foundation easier to implement and more accessible to the developer community.
- **OpenPOWER READY FPGA Accelerator Boards** Alpha Data, a leading United Kingdom- and North America-based supplier of high-performance FPGA solutions, [showcased](http://www.alpha-data.com/news.php) a line of low latency, low power, OpenPOWER READY compliant FPGA accelerator boards. The production-ready PCIe accelerator boards are intended for datacenter applications requiring high-throughput processing and software acceleration.
- **OpenPOWER Developer Challenge Winners** After evaluating the work of more than 300 developers that participated in the inaugural OpenPOWER Developer Challenge, the OpenPOWER Foundation announced [four Grand Prize winners](https://openpowerfoundation.org/blogs/openpower-developer-challenge-winners/).  The developers received a collective total of $15,000 in prizes recognizing their OpenPOWER-based development projects including:
- [Emergency Prediction on Spark](http://devpost.com/software/emergencypredictiononspark): Antonio Carlos Furtado from the University of Alberta predicts Seattle emergency call volumes with Deep Learning on OpenPOWER;
- [TensorFlow Cancer Detection](http://devpost.com/software/distributedtensorflow4cancerdetection): Altoros Labs brings a turbo boost to automated cancer detection with OpenPOWER;
- [ArtNet Genre Classifier](http://devpost.com/software/artnet-genre-classifier): Praveen Sridhar and Pranav Sridhar turn OpenPOWER into an art connoisseur; and
- [Scaling Up and Out a Bioinformatics Algorithm](http://devpost.com/software/scaling-up-and-out-a-bioinformatics-algorithm): Delft University of Technology advances precision medicine by scaling up and out on OpenPOWER.

**Expanded European Ecosystem** Across Europe, technology leaders continue to join the OpenPOWER Foundation, bringing the European roster to a total of 60 members today. Increased membership drives further innovation in areas like acceleration, networking, storage and software all optimized for the OpenPOWER platform. Some of the most recent European members to bring their expertise to the broader OpenPOWER ecosystem in 2016 include:

- from Belgium: Calyos
- from France: GENCI, Splitted-Desktop Systems
- from Germany: IndependIT Integrative Technologies, LRZ, Paderborn University, Technical University of Munich, ThinkParQ, Thomas-Kren AG
- from Greece: University of Peloponnese
- from The Netherlands: Delft University of Technology, Synerscope
- from Norway: Dolphin Interconnect Solutions
- from Russia: Cognitive Technologies
- from Spain: Barcelona Supercomputing Center
- from Switzerland: Groupe T2i SA, Kolab Systems AG
- from Turkey: SC3 Electronics
- from the United Kingdom: Quru, Reconfigure.io, University of Exeter, University of Oxford

**About the OpenPOWER Foundation** The OpenPOWER Foundation is a global, open development membership organization formed to facilitate and inspire collaborative innovation on the POWER architecture. OpenPOWER members share expertise, investment and server-class intellectual property to develop solutions that serve the evolving needs of technology customers.

The OpenPOWER Foundation enables members to customize POWER CPU processors, system platforms, firmware and middleware software for optimization for their business and organizational needs. Member innovations delivered and under development include custom systems for large scale data centers, workload acceleration through GPU, FPGA or advanced I/O, and platform optimization for software appliances, or advanced hardware technology exploitation. For further details visit [www.openpowerfoundation.org](http://www.openpowerfoundation.org).

\# # #

Media Contact: Crystal Monahan, Text100 for OpenPOWER, Tel: +1 617.399.4921, Email: [crystal.monahan@text100.com](mailto:crystal.monahan@text100.com)

**Supporting Quotes from OpenPOWER Foundation European Members**

**Barcelona Supercomputing Center** "We feel honored to become a member of the OpenPOWER Foundation,” said Mateo Valero, Director of the Barcelona Supercomputing Center. “Working closely with the OpenPOWER community will give us the opportunity to collaborate with other leading institutions in high performance architectures, programming models and applications.”

**Cognitive Technologies** “We see OpenPOWER technology and innovation as key enablers for our Autonomous Driving technology and Neural Network capability,” said Andrey Chernogorov, CEO of Cognitive Technologies, an active driver assistance systems developer. “We believe that our major competitive advantage is the robust artificial intelligence that our system is based on. It makes it possible for the autonomous vehicle control system to firmly operate in bad weather conditions and on bad or damaged roads with no road marking. Since over 70% of the roads in the world can be considered as bad we plan to become a global market leader. At the moment our major competitor is the Israeli developer Mobileye.”

**Jülich Supercomputing Centre** “For a leading provider of computing resources for science, OpenPOWER is an exciting opportunity to create future supercomputing infrastructure and enable new science,” said Dr. Dirk Pleiter, Research Group Leader, Jülich Supercomputing Centre.

**SC3** “Seeing foremost Internet giants start up the OpenPOWER Foundation, and then watching the vast, wide and deep global hardware community (including CPU, GPU, memory, NVM, networking, FPGA and ODMs), software providers (OS and applications), the services industry, and who's-who academic and scientific institutions turn it into a truly impressive ecosystem, convinced us to join and contribute to this great organization with high enthusiasm,” said SC3 Executive Vice President Emre Bilgi. “After a global search of over two years for our supercomputing architecture, we see great opportunities in the OpenPOWER Foundation today and in the future.”

**ThinkParQ** "It is very important for our customers that BeeGFS delivers highest I/O performance and takes full advantage of the latest technologies,” said ThinkParQ CEO Sven Breuner. “The OpenPOWER platform comes with outstanding performance features and has a very promising roadmap, which make it an ideal basis for such demanding applications."

![openpower_europe_slide-02_02-1](images/OpenPOWER_Europe_Slide-02_02-1.jpg)

@ -1,126 +0,0 @@
---
title: "Evaluating Julia for Deep Learning on Power Systems + NVIDIA Hardware"
date: "2016-11-14"
categories:
- "blogs"
tags:
- "featured"
---

_By Deepak Vinchhi, Co-Founder and Chief Operating Officer, Julia Computing, Inc._

Deep Learning is now ubiquitous in the machine learning world, with useful applications in a number of areas. In this blog post, we explore the use of Julia for deep learning experiments on Power Systems + NVIDIA hardware.

We shall demonstrate:

1. The ease of specifying deep neural network architectures in Julia and visualizing them. We use MXNet.jl, a Julia package for deep learning.
2. The ease of running Julia on Power Systems. We ran all our experiments on a PowerNV 8335-GCA, which has 160 CPU cores, and a Tesla K80 (dual) GPU accelerator. IBM and [OSUOSL](http://osuosl.org/) have generously provided us with the infrastructure for this analysis.

## **Introduction**

Deep neural networks have been around since the [1940s](http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/neural4.html), but have only recently been deployed in research and analytics because of strides and improvements in computational horsepower. Neural networks have a wide range of applications in machine learning: vision, speech processing, and even [self-driving cars](https://blogs.nvidia.com/blog/2016/06/10/nyu-nvidia/). An interesting use case for neural networks could be the ability to drive down costs in medical diagnosis. Automated detection of diseases would be of immense help to doctors, especially in places around the world where access to healthcare is limited.

[Diabetic retinopathy](https://en.wikipedia.org/wiki/Diabetic_retinopathy) is an eye disease brought on by diabetes. There are over 126.2 million people in the world (as of 2010) with diabetic retinopathy, and this is [expected](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3491270/) to rise to over 191.2 million by 2030. According to the WHO in 2006, it [accounted](http://www.who.int/blindness/causes/priority/en/index5.html) for 5% of world blindness.

Hence, early automatic detection of diabetic retinopathy would be desirable. To that end, we took up an image classification problem using real clinical data. The data was provided to us by [Drishti Care](http://drishticare.org), which is a social enterprise that provides affordable eye care in India. We obtained a number of eye [fundus](https://en.wikipedia.org/wiki/Fundus_(eye)) images from a variety of patients. The eyes affected by retinopathy are generally marked by inflamed veins and cotton spots. The following picture on the left is a normal fundus image whereas the one on the right is affected by diabetic retinopathy.

![julia-1](images/Julia-1.png)

## **Setup**

We built MXNet from source with CUDA and OpenCV. This was essential for training our networks on GPUs with CUDNN, and reading our image record files. We had to build GCC 4.8 from source so that our various libraries could compile and link without error, but once we did, we were set up and ready to start working with the data.

## **The Hardware: IBM Power Systems**

We chose to run this experiment on an IBM Power System because, at the time of this writing, we believe it is the best environment available for this sort of work. The Power platform is ideal for deep learning, big data, and machine learning due to its high performance, large caches, 2x-3x higher memory bandwidth, very high I/O bandwidth, and of course, tight integration with GPU accelerators. The parallel multi-threaded Power architecture with high memory and I/O bandwidth is particularly well adapted to ensure that GPUs are used to their fullest potential.

We're also encouraged by the industry's commitment to the platform, especially with regard to AI, noting that NVIDIA made its premier machine learning-focused GPU (the Tesla P100) available on Power well before x86, and that innovations like NVLink are only available on Power.

## **The Model**

The idea is to train a deep neural network to classify all these fundus images into infected and uninfected images. Along with the fundus images, we have at our disposal a number of training labels identifying whether or not each patient is infected.

We used [MXNet.jl](https://github.com/dmlc/MXNet.jl), a powerful Julia package for deep learning. This package allows the user to use a high level syntax to easily specify and chain together large neural networks. One can then train these networks on a variety of heterogeneous platforms with multi-GPU acceleration.

As a first step, it's good to load a pretrained model which is known to be good at classifying images. So we decided to download and use the [ImageNet model called Inception](https://research.googleblog.com/2016/03/train-your-own-image-classifier-with.html) with weights from its 39th training epoch. On top of that we specify a simple classifier.

```julia
# Extend model as we wish
arch = mx.@chain mx.get_internals(inception)[:global_pool_output] =>
       mx.Flatten() =>
       mx.FullyConnected(num_hidden = 128) =>
       mx.Activation(act_type = :relu) =>
       mx.FullyConnected(num_hidden = 2) =>
       mx.WSoftmax(name = :softmax)
```

And now we train our model:

```julia
mx.fit(
    model,
    optimizer,
    dp,
    n_epoch = 100,
    eval_data = test_data,
    callbacks = [
        mx.every_n_epoch(save_acc, 1, call_on_0 = false),
        mx.do_checkpoint(prefix, save_epoch_0 = true),
    ],
    eval_metric = mx.MultiMetric([mx.Accuracy(), WMultiACE(2)])
)
```

One feature of the data is that it is highly [imbalanced](http://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/). For every 200 uninfected images, we have only 3 infected images. One way of approaching that scenario is to penalize the network heavily for every infected case it gets wrong. So we replaced the normal Softmax layer towards the end of the network with a _weighted_ softmax. To check whether we are overfitting, we selected multiple [performance metrics](http://machinelearningmastery.com/classification-accuracy-is-not-enough-more-performance-measures-you-can-use/).
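The weighted-softmax idea above can be illustrated with a small sketch. This is not MXNet.jl's actual `WSoftmax` implementation; it is a minimal NumPy illustration with a hypothetical function name (`weighted_softmax_loss`), assuming the roughly 200:3 class ratio stated above:

```python
import numpy as np

def weighted_softmax_loss(logits, labels, class_weights):
    """Cross-entropy in which each sample's loss is scaled by the
    weight of its true class, so rare (infected) cases are penalized
    more heavily when misclassified."""
    # Numerically stable softmax over the class axis.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # Negative log-likelihood of the true class for each sample.
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    # Weight each sample by its true class, then average.
    w = class_weights[labels]
    return float((w * nll).sum() / w.sum())

# With ~200 uninfected images per 3 infected ones, weight the rare
# class roughly in inverse proportion to its frequency.
class_weights = np.array([1.0, 200.0 / 3.0])
```

With such weights, a missed infected case dominates the batch loss, which is the behavior we wanted from the replacement softmax layer.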

However, from our [cross-entropy](https://www.wikiwand.com/en/Cross_entropy) measures, we found that we were still overfitting. With fast training times on dual GPUs, we trained our model quickly to understand the drawbacks of our current approach.

![Performance Comparison between CPU and GPU on Training](images/julia-2-1024x587.png)

_Performance Comparison between CPU and GPU on Training_

Therefore we decided to employ a different approach.

The second way to deal with our imbalanced dataset is to generate smaller, more balanced datasets that contained roughly equal numbers of uninfected images and infected images. We produced two datasets: one for training and another for cross validation, both of which had the same number of uninfected and infected patients.

Additionally, we decided to shuffle our data. Every epoch, we resampled the uninfected images in the training dataset from the larger pool of uninfected images (of which there were many) to expose the model to a range of uninfected images so that it could generalize well. We then started doing the same for the infected images. This was quite simple to implement in Julia: we simply had to overload a particular function and modify the data.
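The per-epoch resampling described above can be sketched as follows. The function and variable names (`balanced_epoch`, the two pools) are hypothetical, and the sketch uses NumPy rather than the original Julia code:

```python
import numpy as np

def balanced_epoch(uninfected_pool, infected_pool, rng):
    """Build one epoch's training set: draw a fresh random subset of
    the large uninfected pool, matching the infected pool in size,
    then shuffle the two classes together."""
    n = len(infected_pool)
    idx = rng.choice(len(uninfected_pool), size=n, replace=False)
    images = np.concatenate([uninfected_pool[idx], infected_pool])
    labels = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])
    perm = rng.permutation(len(images))
    return images[perm], labels[perm]
```

Calling this once per epoch means the model sees a different slice of the majority class each time while the dataset it trains on stays balanced.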

Most of these steps were done incrementally. Our Julia setup and environment made it easy for us to quickly change code and train models and incrementally add more tweaks and modifications to our models as well as our training methods.

We also augmented our data by adding low levels of Gaussian noise to random images from both the uninfected images and the infected images. Additionally, some images were randomly rotated by 180 degrees. Rotations are quite ideal for this use case because the important spatial features would be preserved. This artificially expanded our training set.
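A minimal sketch of this augmentation step, assuming images are float arrays scaled to [0, 1]; the function name and parameter values are illustrative, not taken from the original code:

```python
import numpy as np

def augment(image, rng, noise_sigma=0.01, p_rotate=0.5):
    """Add low-level Gaussian noise and, with some probability,
    rotate the fundus image by 180 degrees (spatial features are
    preserved under this rotation)."""
    out = image + rng.normal(0.0, noise_sigma, size=image.shape)
    if rng.random() < p_rotate:
        out = np.rot90(out, k=2)  # two 90-degree turns = 180 degrees
    return np.clip(out, 0.0, 1.0)
```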

However, we found that while these measures stopped our model from overfitting, we could not obtain adequate performance. We explore the possible reason for this in the subsequent section.

## **Challenges**

Since the different approaches we outlined in the previous section were easy to implement within our Julia framework, our experimentation could be done quickly and these various challenges were easy to pinpoint.

The initial challenge we faced was that our data is imbalanced, and so we experimented with penalizing incorrect decisions made by the classifier. We then tried generating a balanced (yet smaller) dataset in the first place, and it turned out that we were overfitting. To counter this, we performed the shuffling and data augmentation techniques. But we didn't get much performance from the model.

Why is that so? Why is it that a model as deep as Inception wasn't able to train effectively on our dataset?

The answer, we believe, lies in the data itself. On a randomized sample from the data, we found that there were two inherent problems with the data: firstly, there are highly blurred images with no features among both healthy and infected retinas.

![Images such as these make it difficult to extract features](images/Julia-3-300x225.png)

_Images such as these make it difficult to extract features_

Secondly, there are some features in the healthy images that one might expect to find in the infected images. For instance, in some images the veins are somewhat puffed, and in others there are cotton spots. Below are some examples. While we note that the picture on the left is undoubtedly infected, notice that the one on the right also has a few cotton spots and inflamed veins. So how does one differentiate? More importantly, how does our model differentiate?

![julia-4](images/Julia-4.png)

So what do we do about this? For the training set, it would be helpful to have each image, rather than each patient, independently diagnosed as healthy or infected by a doctor or by two doctors working independently. This would likely improve the model's predictions.

## **The Julia Advantage**

Julia provides a distinct advantage at every stage for scientists engaged in machine learning and deep learning.

First, Julia is very efficient at preprocessing data. A very important first step in any machine learning experiment is to organize, clean up and preprocess large amounts of data. This was extremely efficient in our Julia environment, which is known to be orders of magnitude faster than comparable environments such as Python.

Second, Julia enables elegant code. Our models were chained together using Julia's flexible syntax. Macros, metaprogramming and syntax familiar to users of any technical environment allow for easy-to-read code.

Third, Julia facilitates innovation. Since Julia is a first-class technical computing environment, we can easily deploy the models we create without changing any code. Julia hence solves the famous “two-language” problem, by obviating the need for different languages for prototyping and production.

Due to all the aforementioned advantages, we were able to complete these experiments in a very short period of time compared with other comparable technical computing environments.

## **Call for Collaboration**

We have demonstrated in this blog post how to write an image classifier based on deep neural networks in Julia and how easy it is to perform multiple experiments. Unfortunately, there are challenges with the dataset that require more fine-grained labelling. We have reached out to appropriate experts for assistance in this regard.

Users who are interested in working with the dataset and possibly collaborating on this with us are invited to reach out via email to [ranjan@juliacomputing.com](mailto:ranjan@juliacomputing.com) to discuss access to the dataset.

## **Acknowledgements**

I would like to thank a number of people for helping me with this work: [Valentin Churavy](https://github.com/vchuravy) and [Pontus Stenetorp](https://github.com/ninjin) for guiding and mentoring me, and [Viral Shah](https://github.com/ViralBShah) of Julia Computing. Thanks to IBM and OSUOSL too for providing the hardware, as well as Drishti Care for providing the data.

@ -1,30 +0,0 @@
---
title: "Exascale Simulations of Stellar Explosions with FLASH on Summit"
date: "2019-01-24"
categories:
- "blogs"
tags:
- "featured"
---

_Featuring OpenPOWER Member: [Oak Ridge National Laboratory](https://www.ornl.gov/)_

By [Ganesan Narayanasamy](https://www.linkedin.com/in/ganesannarayanasamy/), senior technical computing solution and client care manager, IBM

At the [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/), developers shared case studies on the work they're doing using OpenPOWER platforms. One particularly interesting session was led by [James Austin Harris](https://www.olcf.ornl.gov/directory/staff-member/james-harris/), postdoctoral research associate and member of the FLASH Center For Accelerated Application Readiness (CAAR) project at Oak Ridge National Laboratory (ORNL).

Harris and his group at ORNL study supernovae and their nucleosynthetic products to improve our understanding of the origins of the heavy elements in nature. His session focused on exascale simulations of stellar explosions using FLASH. FLASH is a publicly available, component-based, MPI+OpenMP parallel, adaptive mesh refinement (AMR) code that has been used on a variety of parallel platforms for problems in astrophysics, high-energy-density physics, and more. It's ideal for studying nucleosynthesis in supernovae due to its multi-physics and AMR capabilities.

The work is primarily focused on increasing physical fidelity by accelerating the nuclear burning module and associated load balancing. And using [Summit](https://www.olcf.ornl.gov/summit/), [the most powerful supercomputer in the world](https://www.top500.org/news/us-regains-top500-crown-with-summit-supercomputer-sierra-grabs-number-three-spot/), had an enormous impact.

Summit's GPU performance fundamentally changes the potential science impact by enabling large-network (160 or more nuclear species) simulations. Preliminary results indicate that the time for a 160-species run on Summit was roughly equal to that of a 13-species run previously performed on Titan. In other words, more than 100x the computation at an identical cost.

Overall the CAAR group has had a very positive experience with Summit, and still has more work to do, including exploring hydrodynamics, gravity and radiation transport.

View Harris' full session video and slides below.

https://www.youtube.com/watch?v=5e6IUzl6A6Q

<iframe style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 5px; max-width: 100%;" src="//www.slideshare.net/slideshow/embed_code/key/xZtUdi7A6afbi" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe>

**[Towards Exascale Simulations of Stellar Explosions with FLASH](//www.slideshare.net/ganesannarayanasamy/towards-exascale-simulations-of-stellar-explosions-with-flash "Towards Exascale Simulations of Stellar Explosions with FLASH")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)**

@ -1,71 +0,0 @@
---
title: "Exploring the Fundamentals of OpenPOWER, POWER9 and PowerAI at the University of Reims"
date: "2019-06-25"
categories:
- "blogs"
tags:
- "featured"
- "power9"
- "ibm-power-systems"
- "barcelona-supercomputing-center"
- "powerai"
- "ebv-elektronik"
---

By Professor Michaël Krajecki, Université de Reims Champagne-Ardenne

Last month, the University of Reims hosted a workshop introducing the fundamentals of the OpenPOWER Foundation, POWER9 and PowerAI. Students and faculty from the University were joined by experts from [IBM POWER Systems](https://www.ibm.com/it-infrastructure/power), [EBV Elektronik](https://www.avnet.com/wps/portal/ebv/) and the [Barcelona Supercomputing Center](https://www.bsc.es/) for a great session!

![](images/Reims.png)

Multiple topics relating to POWER9, deep learning and PowerAI were discussed.

- [Thibaud Besson](https://fr.linkedin.com/in/thibaud-besson-3476b42b), IBM Power Systems: **Fundamentals of the OpenPOWER Foundation, POWER9 and PowerAI**: Besson discussed why POWER9 is the most state-of-the-art computing architecture developed with AI workloads in mind. He also showcased PowerAI, the software side of the solution, explaining its ease of use and straightforward installation that reduces time to market for implementors.

- [Franck Maul](https://fr.linkedin.com/in/franck-maul-76bba74), EBV Elektronik: **On Xilinx Offerings**: Maul presented Xilinx products that are going to revolutionize the AI market in the near future, explaining why Xilinx's offering is the best fit for customers in the current market. He also showed off Xilinx FPGAs, emphasizing their perfect fit with IBM AC922 servers.

- [Dr. Guillaume Houzeaux](https://www.linkedin.com/in/guillaume-houzeaux-0079b02/?originalSubdomain=es), Barcelona Supercomputing Center: **How Fluid Dynamics Can Be Implemented on POWER9 and AC922 Servers**: In one of the day's more technical sessions, BSC examined how a major Spanish car manufacturer has implemented fluid dynamics on a cluster of AC922 servers to improve automotive design and to reduce product cost and cycle time.

- Ander Ochoa Gilo, IBM: **Distributed Deep Learning and Large Model Support**: Ochoa Gilo dove into the benefits of deep learning, showing not only how we can overcommit the memory of the GPUs in both Caffe and TensorFlow, but also how to implement it. Using live examples, Ochoa Gilo explained how deep learning is accelerated on AC922 servers, allowing users to work with images with up to 10x more resolution vs. x86 alternatives. He also demonstrated another useful feature of PowerAI, distributed deep learning, which allows a model to be trained on two servers using RDMA connectivity between the memory of the AC922 servers, reducing training time. Finally, Ochoa Gilo showcased the SnapML framework, which allows non-deep-learning models to be accelerated by GPUs, reducing training time by 4x. He ran live examples that demonstrated its effectiveness right out of the box; some researchers in the room were so impressed by the framework that they implemented it in their clusters before the demonstration ended!

- [Thibaud Besson](https://fr.linkedin.com/in/thibaud-besson-3476b42b), IBM POWER Systems: **PowerAI Vision, CAPI and OpenCAPI Interface to FPGA on POWER**: Thibaud Besson returned to explain why PowerAI Vision is a fundamental solution for companies that cannot afford to hire the world's best data scientists. In a live example, he created a dataset from scratch, ran a training and then put it into production. The model was able to be monetized in minutes, offering its usefulness to any software that can make a REST API call. To wrap up, Besson explained the usefulness of being an open architecture, diving into CAPI and OpenCAPI and the benefits of using them in I/O-intensive workloads.

AI is a key topic of interest for the University of Reims and its partners as further projects out of the University explore AI in agriculture and viticulture. As such, participants learned more about OpenPOWER and AI, and speakers in return were able to better understand the needs of our local researchers. All in all, the workshop was well-received and highly engaging. Thank you to everyone who participated!

@ -1,28 +0,0 @@
---
title: "Exploring the Power of New Possibilities"
date: "2019-08-19"
categories:
- "blogs"
tags:
- "openpower"
- "ibm"
- "google"
- "summit"
- "wistron"
- "openpower-foundation"
- "red-hat"
- "inspur"
- "hitachi"
- "yadro"
- "raptor"
- "sierra"
- "infographic"
---

By Hugh Blemings, Executive Director, OpenPOWER Foundation

In the six years since its creation, the OpenPOWER Foundation has facilitated our members combining their unique technologies and expertise, and through this enabled some major breakthroughs in modern computing. With more than 350 members from all around the world and from all layers of the hardware/software stack, together we're opening doors to a new level of open.

While we kick off OpenPOWER Summit North America today and look ahead to the next frontier, it's also important to reflect on all that we've accomplished to date. Explore some of the milestones in the infographic below!

![](images/9034_IBMPower_OpenPOWERInfographic_080519.png)

@ -1,41 +0,0 @@
---
title: "Express Ethernet Technology Solves for Big Data Variances"
date: "2019-01-23"
categories:
- "blogs"
tags:
- "featured"
---

_Featuring OpenPOWER member: [NEC](https://in.nec.com/)_

By: [Deepak Pathania](https://www.linkedin.com/in/deepak-pathania-3aa4a938/)­­­­­, Senior Technical Leader, NEC Technologies India

I recently had the honor of speaking at the [3rd OpenPOWER Academic Discussion Group Workshop](https://www.linkedin.com/pulse/openpower-3rd-academia-workshop-updates-ganesan-narayanasamy/). I spoke alongside more than 40 other developers and researchers on my work with [NEC](https://in.nec.com/).

My session focused on how at NEC, we explored solutions to common problems in two types of remote capabilities: ubiquitous computing and IoT solutions. Our solution was to extend the PCIe switch over Ethernet, and in doing so we discovered a new way of looking at connecting multiple PCIe devices remotely.

**The Problem: Variances of Big Data**

Accelerators allow for real-time results for analytics; however, there is a problem with having an interconnect that ties all the architectures together, which can result in lower accuracy in values. Another part of this problem is the high demand of Big Data: not only is there high demand for analyzing this data, but results are wanted in real time.

**The Solution: Express Ethernet Technology**

Express Ethernet is a PCIe extension over Ethernet: it moves the PCIe slots out of the computer and extends them over Ethernet. This eliminates performance lag, giving the user two capabilities: distance and switching. Distance allows the user to extend a connection over two kilometers, and the switching capability allows for alternating between different types of hardware, all without the need to modify existing hardware or software.

In summation, the Express Ethernet system allows us to have the next generation computer hardware architectures because the system:

- Allows distance (length) with dynamic switching capability
- Provides the same or similar performance for local versus remotely located I/Os
- Moves devices from within the chassis to the outside, with plug-and-play ability
- Keeps legacy devices useful and enables cost-effective system realization

To learn more about Express Ethernet technology and the work being done at NEC, view the full video session and presentation below.

https://www.youtube.com/watch?v=lTaBIhgiNB4

<iframe style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 5px; max-width: 100%;" src="//www.slideshare.net/slideshow/embed_code/key/ID1BEDPhGvQHyi" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe>

**[PCI Express switch over Ethernet or Distributed IO Systems for Ubiquitous Computing and IoT Solutions](//www.slideshare.net/ganesannarayanasamy/pci-express-switch-over-ethernet-or-distributed-io-systems-for-ubiquitous-computing-and-iot-solutions "PCI Express switch over Ethernet or Distributed IO Systems for Ubiquitous Computing and IoT Solutions")** from **[Ganesan Narayanasamy](https://www.slideshare.net/ganesannarayanasamy)**

@ -1,38 +0,0 @@
---
title: "Managing Reconfigurable FPGA Acceleration in a POWER8-based Cloud with FAbRIC"
date: "2016-05-06"
categories:
- "blogs"
tags:
- "featured"
---

_By Xiaoyu Ma, PhD Candidate, University of Texas at Austin_

_This post is the first in a series profiling the work developers are doing on the OpenPOWER platform. We will be posting more from OpenPOWER developers as we continue our [OpenPOWER Developer Challenge](http://openpower.devpost.com)._

![tacc](images/tacc.png)

FPGAs (Field-Programmable Gate Arrays) are becoming prevalent. Top hardware and software vendors have started making it a standard to incorporate FPGAs into their compute platforms for performance and power benefits. IBM POWER8 delivers CAPI (Coherent Accelerator Processor Interface) to enable FPGA devices to be coherently attached on the PCIe bus. Industries from banking and finance, retail, [healthcare](https://openpowerfoundation.org/blogs/genomics-with-apache-spark/) and many other fields are exploring the benefits of [FPGA-based acceleration](https://openpowerfoundation.org/blogs/capi-drives-business-performance/) on the OpenPOWER platform.

## FPGAs in the Cloud

When it comes to cloud compute, in-cloud FPGAs are appealing due to the combined benefits of both FPGAs and clouds. On one hand, FPGAs improve cloud performance and save power by orders of magnitude. On the other hand, the cloud infrastructure reduces cost per compute by resource sharing and large-scale FPGA system access without the user needing to own and manage the system. Furthermore, cloud enables a new level of collaboration as the identical underlying infrastructure makes it easier for users of the same cloud to share their work, to verify research ideas, and to compare experimental results.

While clouds with FPGAs are available in companies like IBM, there are, however, few FPGA clouds available for public, especially academic, use. To target this problem, we created [FAbRIC](https://wikis.utexas.edu/display/fabric/Home) (FPGA Research Infrastructure Cloud), a project led by Derek Chiou at The University of Texas at Austin. It enables FPGA research and development on large-scale systems by providing FPGA systems, tools, and servers to run tools in a cloud environment. Currently all FAbRIC clusters are equipped with reconfigurable fabric to run FPGA-accelerated workloads. To be available for open use, FAbRIC systems are placed in the [Texas Advanced Computing Center](https://www.tacc.utexas.edu/systems/fabric) (TACC), the supercomputer center of The University of Texas at Austin.

![FaBRIC post](images/FaBRIC-post-1.jpg)

## Using FPGAs with FAbRIC

The FAbRIC POWER8+CAPI system (Figure A) is a cluster of several x86 servers and nine POWER8 servers. The x86 nodes serve as the gateway node, the file server and build machines for running FPGA tools. Each POWER8 node is a heterogeneous compute platform equipped with three accelerating devices (Figure B): a Nallatech 385 A7 Stratix V FPGA adapter, an Alpha-Data 7V3 Virtex-7 Xilinx-based FPGA adapter and an NVIDIA Tesla K40m GPGPU card. The FPGA boards are CAPI-enabled to provide coherent shared memory between the processor and accelerators.

To use FPGA accelerators on POWER8 nodes, the user designs the FPGA accelerator source code, typically in an RTL such as Verilog or VHDL, pushes it through the FPGA compiler, programs the FPGA with the generated configuration image, and runs it with host programs. In addition to the conventional RTL design flow, which has low programmability, Bluespec System Verilog and high-level synthesis flows including OpenCL and Xilinx Vivado C-to-Gate are offered as alternatives for the synthesis of FPGA accelerators. Such flows allow users to abstract away the traditional hardware FPGA development flow in favor of a higher-level software development flow, and therefore reduce the FPGA accelerator design cycle.

## Weaving FAbRIC

After months of work to ensure in-cloud FPGAs are manageable, which we discovered to be nontrivial since opening close-to-the-metal access with reconfigurability creates vulnerabilities, FAbRIC POWER8+CAPI is up and available to the public research community upon request. Our early "family and friends" users have been running real-world applications reliably and generating promising results for their research projects. As another use case of the system, IBM will launch a CAPI design contest in the late spring of 2016.

* * *

_About Xiaoyu Ma: Xiaoyu Ma is a PhD candidate in the Department of Electrical and Computer Engineering at The University of Texas at Austin. He is advised by Prof. Derek Chiou. His research areas include FPGA-based hardware specialization, hardware design programming models, FPGA cloud infrastructure and microprocessor architecture. He is also an employee of the Large Scale System group at the Texas Advanced Computing Center, serving as the lead system administrator for the FPGA Research Infrastructure Cloud (FAbRIC) project._

@ -1,105 +0,0 @@
---
title: "Final Draft of the Power ISA EULA Released"
date: "2020-02-13"
categories:
- "blogs"
tags:
- "ibm"
- "power-isa"
- "microwatt"
- "eula"
- "chiselwatt"
- "end-user-license-agreement"
---

**By: Hugh Blemings**

On August 20, 2019 the OpenPOWER Foundation, along with IBM, announced that the POWER ISA was to be released under an open license. You can read more about it in [previous posts](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) but the short story is that anyone is now free to build their own POWER ISA compliant chips, ASICs, FPGAs etc. without paying a royalty and with a "pass through" patent license from IBM for anything that pertains to the ISA itself. On top of this, of course, is the ability to contribute to the ISA through a Workgroup we're standing up within the OpenPOWER Foundation.

Microwatt and Chiselwatt are just two examples of implementations that come under this license and there are rumblings about some others, including credible discussions around SoCs based on the ISA. Exciting times ahead!

We've had some questions about what the actual End User License Agreement (EULA) will look like and we're pleased to present a final draft of it below. If you've questions or feedback please do get in touch. The details of the associated Workgroup are being finalised by the board, more to follow on that too. :)

## **FINAL DRAFT - Power ISA End User License Agreement - FINAL DRAFT**

“Power ISA” means the microprocessor instruction set architecture specification version provided to you with this document. By exercising any right under this End User License Agreement, you (“Recipient”) agree to be bound by the terms and conditions of this Power ISA End User License (“Agreement”).

All information contained in the Power ISA is subject to change without notice. The products described in the Power ISA are NOT intended for use in applications such as implantation, life support, or other hazardous uses where malfunction could result in death, bodily injury, or catastrophic property damage.

**Definitions**

“Architectural Resources” means assignable resources necessary for elements of the Power ISA to interoperate, including, but not limited to: opcodes, special purpose registers, defined registers, reserved bits in existing defined registers, control table fields and bits, and interrupt vectors.

“Compliancy Subset” means a portion of the Power ISA, defined within the Power ISA, which must be implemented to ensure software compatibility across Power ISA compliant devices.

“Contribution” means any work of authorship that is intentionally submitted to OPF for inclusion in the Power ISA by the copyright owner or by an individual or entity authorized to submit on behalf of the copyright owner. Without limiting the generality of the preceding sentence, RFCs will be considered Contributions.

“Custom Extensions” means additions to the Power ISA in a designated subset of Architectural Resources defined by the Power ISA. For clarity, Custom Extensions are not Contributions.

"Integrated Circuit" shall mean an integral unit including a plurality of active and passive circuit elements formed at least in part of semiconductor material arranged on or in a chip(s) or substrate.

“OPF” means The OpenPOWER Foundation.

“Licensed Patent Claims” means patent claims:

(a) licensable by or through OPF; and

(b) which, but for this Agreement, would be necessarily infringed by the use of the Power ISA in making, using, or otherwise implementing a Power Compliant Chip.

“Party(ies)” means Recipient or OPF or both.

“OpenPOWER ISA Compliance Definition” means the validation procedures associated with architectural compliance developed, delivered, and maintained by OPF as specified in the following link: [https://openpowerfoundation.org/?resource\_lib=openpower-isa-compliance-definition](https://openpowerfoundation.org/?resource_lib=openpower-isa-compliance-definition).

“Power Compliant” means an implementation of (i) one of the Compliancy Subsets alone or (ii) one of the Compliancy Subsets together with selected permitted combinations of additional instructions and/or facilities within the Power ISA, in the case of clauses (i) and (ii), provided that such implementation meets the corresponding portions of the OpenPOWER ISA Compliance Definition.

“Power ISA Core” means an implementation of the Power ISA that is represented by software, a hardware description language (HDL), or an Integrated Circuit design, but excluding physically implemented chips (such as microprocessors, system on a chips, or field-programmable gate arrays (FPGAs)); provided that such implementation is primarily designed to be included as part of software, a hardware description language (HDL), or an Integrated Circuit design that are in each case Power Compliant, regardless of whether such implementation, independently, is Power Compliant.

“Power Compliant Chip” means a Power Compliant physical implementation of one or more Power ISA Cores into one or more Integrated Circuits, including, for example, in a microprocessor, system on a chip, or a field-programmable gate array (FPGA), provided that all portions of such physical implementation are Power Compliant.

“Request for Change (RFC)” means any request for change in the Power ISA as a whole, or a change in the definition of a Compliancy Subset provided in the Power ISA.

1. **Grant of Rights**

Solely for the purposes of developing and expanding the Power ISA and the POWER ecosystem, and subject to the terms of this Agreement:

1.1 OPF grants to Recipient a nonexclusive, worldwide, perpetual, royalty-free, non-transferable license under all copyrights licensable by OPF and contained in the Power ISA to a) develop technology products compatible with the Power ISA, and b) create, use, reproduce, perform, display, and distribute Power ISA Cores.

1.2 OPF grants to Recipient the right to license Recipient Power ISA Cores under the Creative Commons Attribution 4.0 license.

1.3 OPF grants to Recipient the right to sell or license Recipient Power ISA Cores under independent terms that are consistent with the rights and licenses granted under this Agreement. As a condition of the license grant under this section 1.3, the Recipient must either provide the Power ISA with this Agreement to the downstream recipient, or provide notification for the downstream recipient to obtain the Power ISA and this Agreement to have appropriate patent licenses to implement the Power ISA Core as a Power Compliant Chip. It is clarified that no rights are to be granted under this Section 1.3 beyond what is expressly permitted by this Agreement.

1.4 Notwithstanding Sections 1.1 through 1.3 above, Recipient shall not have the right or license to create, use, reproduce, perform, display, distribute, sell, or license the Power ISA Core in a physically implemented chip (including a microprocessor, system on a chip, or a field-programmable gate array (FPGA)) that is not Power Compliant, nor to license others to do so.

1.5 OPF grants to Recipient a nonexclusive, worldwide, perpetual, royalty-free, non-transferable license under Licensed Patent Claims to make, use, import, export, sell, offer for sale, and distribute Power Compliant Chips.

1.6 If Recipient institutes patent litigation or an administrative proceeding (including a cross-claim or counterclaim in a lawsuit, or a United States International Trade Commission proceeding) against OPF, OPF members, or any third party entity (including but not limited to any third party that made a Contribution) alleging infringement of any Recipient patent by any version of the Power ISA, or the implementation thereof in a CPU design, IP core, or chip, then all rights, covenants, and licenses granted by OPF to Recipient under this Agreement shall terminate as of the date such litigation or proceeding is initiated.

1.7 Without limiting any other rights or remedies of OPF, if Recipient materially breaches the terms of this Agreement, OPF may terminate this Agreement at its discretion.

2. **Modifications to the Power ISA**

2.1 Recipient shall have the right to submit Contributions to the Power ISA through a prospectively authorized process by OPF, but shall not implement such Contributions until fully approved through the prospectively authorized OPF process.

2.2 Recipient may create Custom Extensions as described and permitted in the Power ISA. Recipient is encouraged, but not required, to bring their Custom Extensions through the authorized OPF process for contributions. For clarity, Custom Extensions cannot be guaranteed to be compatible with another third party's Custom Extensions.

3. **Ownership**

3.1 Nothing in this Agreement shall be deemed to transfer to Recipient any ownership interest in any intellectual property of OPF or of any contributor to the Power ISA, including but not limited to any copyrights, trade secrets, know-how, trademarks associated with the Power ISA or any patents, registrations or applications for protection of such intellectual property.

3.2 Recipient retains ownership of all incremental work done by Recipient to create Power ISA Cores and Power Compliant Chips, subject to the ownership rights of OPF and any contributors to the Power ISA. Nothing in this Agreement shall be deemed to transfer to OPF any ownership interest in any intellectual property of Recipient, including but not limited to any copyrights, trade secrets, know-how, trademarks, patents, registrations or applications for protection of such intellectual property.

4. **Limitation of Liability**

4.1 THE POWER ISA AND ANY OTHER INFORMATION CONTAINED IN OR PROVIDED UNDER THIS DOCUMENT ARE PROVIDED ON AN “AS IS” BASIS. OPF makes no representations or warranties, either express or implied, including but not limited to, warranties of merchantability, fitness for a particular purpose, or non-infringement, or that any practice or implementation of the Power ISA or other OPF documentation will not infringe any third party patents, copyrights, trade secrets, or other rights. In no event will OPF or any other person or entity submitting any Contribution to OPF be liable for damages arising directly or indirectly from any use of the Power ISA or any other information contained in or provided under this document.

5. **Compliance with Law**

5.1 Recipient shall be responsible for compliance with all applicable laws, regulations and ordinances, and will obtain all necessary permits and authorizations applicable to the future conduct of its business involving the Power ISA. Recipient agrees to comply with all applicable international trade laws and regulations such as export controls, embargo/sanctions, antiboycott, and customs laws related to the future conduct of the business involving the Power ISA to be transferred under this Agreement. Recipient warrants that it is knowledgeable with, and will remain in full compliance with, all applicable export controls and embargo/sanctions laws, regulations or rules, orders, and policies, including but not limited to, the U.S. International Traffic in Arms Regulations (“ITAR”), the U.S. Export Administration Regulations (“EAR”), and the regulations of the Office of Foreign Assets Control (“OFAC”), U.S. Department of Treasury.

6. **Choice of Law**

6.1 This Agreement is governed by the laws of the State of New York, without regard to the conflict of law provisions thereof.

7. **Publicity**

7.1 Nothing contained in these terms shall be construed as conferring any right to use in advertising, publicity or other promotional activities any name, trade name, trademark or other designation of any Party hereto (including any contraction, abbreviation or simulation of any of the foregoing).

@ -1,26 +0,0 @@
---
title: "FPGA Acceleration in a Power8 Cloud"
date: "2015-01-19"
categories:
- "blogs"
---

### Abstract

OpenStack is one of the most popular software platforms for running a cloud. It manages hardware resources such as memory, disks, and x86 and POWER processors, and provides IaaS to users. Building on existing OpenStack, additional kinds of hardware resources, such as GPUs and FPGAs, can also be managed and offered to users. FPGAs are widely used for many kinds of applications, and the POWER8 processor integrates an innovative interface called CAPI (Coherent Accelerator Processor Interface) for direct connection between an FPGA and the POWER8 chip. CAPI not only provides a low-latency, high-bandwidth, cache-coherent interconnect between a user's accelerator hardware and the application software, but also makes programming easier for both accelerator hardware developers and software developers. Based on these features, we extended OpenStack so that cloud users can remotely use POWER8 machines with FPGA acceleration.

Our work allows cloud users to upload their accelerator designs to an automated compilation service, after which the accelerators are automatically deployed into a customized OpenStack cloud with POWER8 machines and FPGA cards. When users launch virtual machines (VMs) in this cloud, their accelerators can be attached to their VMs so that, inside these VMs, they can use the accelerators for their applications. Like operating system images in the cloud, accelerators can also be shared or sold across the whole cloud, so that one user's accelerator can benefit other users.

By enabling CAPI in the cloud, our work lowers the barrier to using FPGA acceleration and encourages people to use accelerators for their applications and to share them with all cloud users. The CAPI and FPGA acceleration ecosystem also benefits as a result. A public cloud running our work is in testing and is being used by university students. Remote access to the cloud is enabled, so a live demo can be shown during the presentation.

### Bio

Fei Chen works at IBM China Research Lab on cloud and big data. He received his B.S. degree from Tsinghua University, China, and his Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences, in 2011. He worked on hardware design for many years and now focuses on integrating heterogeneous computing resources into the cloud. Organization: IBM China Research Lab (CRL)

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Chen-Fei_OPFS2015_IBM_031315_final.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Chen-Fei_OPFS2015_IBM_031315_final.pdf)


@ -1,38 +0,0 @@
---
title: "Genome Folding and POWER8: Accelerating Insight and Discovery in Medical Research"
date: "2015-11-16"
categories:
- "blogs"
tags:
- "gpu"
- "genomics"
- "healthcare"
---

_By Richard Talbot, Director - Big Data, Analytics and Cloud Infrastructure_

No doubt, the words “surgery” and “human genome” rarely appear in the same sentence. Yet that's what a team of research scientists in the Texas Medical Center announced recently --- a new procedure designed to modify how a human genome is arranged in the nucleus of a cell in three dimensions, with extraordinary precision. Picture folding a genome almost as easily as a piece of paper.

\[caption id="attachment\_2151" align="aligncenter" width="625"\][![An artist's interpretation of chromatin folded up inside the nucleus. The artist has rendered an extraordinarily long contour into a small area, in two dimensions, by hand. Credit: Mary Ellen Scherl.](images/Artist_Interpretation_4_Credit_MaryEllenScherl-1021x1024.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/11/Artist_Interpretation_4_Credit_MaryEllenScherl.jpg) An artist's interpretation of chromatin folded up inside the nucleus. The artist has rendered an extraordinarily long contour into a small area, in two dimensions, by hand. Credit: Mary Ellen Scherl.\[/caption\]

This achievement, which appeared recently in the [Proceedings of the National Academy of Sciences](http://www.pnas.org/), was driven by a team of researchers led by Erez Lieberman Aiden, a geneticist and computer scientist with appointments at the Baylor College of Medicine and Rice University in Houston, and his students Adrian Sanborn and Suhas Rao. The news spread quickly across a broad range of major news sites. Because genome folding is thought to be associated with many life-altering diseases, the implications are profound. Erez said, “This work demonstrates that it is possible to modify how a genome is folded by altering a handful of genetic letters, without disturbing the surrounding DNA.”

Lurking just beneath the surface, this announcement also represents a major computational achievement. Erez and his team have been using IBM's new POWER8 scale-out systems packed with NVIDIA Tesla K40 GPU accelerators to build a 3-D visualization of the human genome and to model the reaction of the genome to this surgical procedure.

https://www.youtube.com/watch?v=Tn5qgEqWgW8

The total length of the human genome is over 3 billion base pairs (a typical measure of the size of a human or mammalian genome), and the data required to analyze a single person's genome can easily exceed a terabyte: enough to fill a stack of CDs that is 40 feet tall. Thus, the computational requirement behind this achievement is a grand challenge of its own.

POWER8 memory bandwidth and the high-octane computational horsepower of the NVIDIA Tesla Accelerated Computing Platform enabled the team to run applications that aren't feasible on industry-standard systems. Aiden said that the discoveries were possible, in part, because these systems enabled his team to analyze far more 3-D folding data than they could before.

This high performance cluster of IBM POWER8 systems, codenamed “PowerOmics”, was installed at Rice University in 2014 and made available to Rice faculty, students and collaborative research programs in the Texas Medical Center. The name “PowerOmics” was selected to portray the Life Sciences research mission of this high performance compute and storage resource for the study of large-scale, data-rich life sciences --- such as genomics, proteomics and epigenomics. This high performance research computing infrastructure was made possible by a collaboration with OpenPOWER Foundation members Rice University, IBM, NVIDIA and Mellanox.

* * *

For more information:

- Baylor College of Medicine, Press Release October 19, 2015: [Team at Baylor successfully performs surgery on a human genome, changing how it is folded inside the cell nucleus](https://www.bcm.edu/news/molecular-and-human-genetics/changing-how-human-genome-folds-in-nucleus)
- Rice University, Press Release October 19, 2015: [Gene on-off switch works like backpack strap](http://news.rice.edu/2015/10/19/gene-on-off-switch-works-like-backpack-strap-2/)
- Time, Oct. 19, 2015: [Researchers Perform First Genome Surgery](http://time.com/4078582/surgery-human-genome/)

* * *

@ -1,47 +0,0 @@
---
title: "Delft University Analyzes Genomics with Apache Spark and OpenPOWER"
date: "2015-12-14"
categories:
- "blogs"
tags:
- "openpower"
- "power8"
- "genomics"
- "spark"
---

_By Zaid Al-Ars, Cofounder, Bluebee, Chair of the OpenPOWER Foundation Personalized Medicine Working Group, and Assistant Professor at Delft University of Technology_

The collaboration between the Computer Engineering Lab of the Delft University of Technology (TUDelft) and the IBM Austin Research Lab (ARL) started two years ago. Peter Hofstee invited me for a sabbatical visit to ARL to collaborate on big data challenges in the field of genomics and to investigate areas of common interest to work on together. The genomics field poses a number of challenges for high-performance computing systems and requires architectural optimizations to various subsystem components to effectively run the algorithms used in this field. Examples of such required architectural optimizations are:

- Optimizations to the I/O subsystem, due to the large data file sizes that need to be accessed repetitively
- Optimizations to the memory subsystem, due to the in-memory processing requirements of genomics applications
- Optimizations to the scalability of the algorithms to utilize the available processing capacity of a cluster infrastructure.

To address these requirements, we set out to implement such genomics algorithms using a scalable big data framework that is capable of performing in-memory computation on a high performance cluster with optimized I/O subsystem.

\[caption id="attachment\_2183" align="aligncenter" width="625"\][![Frank Liu and Zaid Al-Ars stand next to the ten-node POWER8 cluster running their tests](images/Delft-1-768x1024.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/Delft-1.jpg) Frank Liu and Zaid Al-Ars stand next to the ten-node POWER8 cluster running their tests\[/caption\]

## Sparking the Innov8 with POWER8 University Challenge

From this starting point, we had the idea of building a high-performance system for genomics applications and entering it in the [Innov8 with POWER8 University Challenge](http://www-03.ibm.com/systems/power/education/academic/university-challenge.html?cmp=IBMSocial&ct=C3970CMW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us). In the process, TUDelft would bring together various OpenPOWER technologies developed by IBM, Xilinx, Bluebee, and others to create a solution to a computational challenge with a direct impact on healthcare, in cancer diagnostics, as well as a scientific impact on genomics research in general. We selected Apache Spark as our big data software stack of choice, due to its scalable in-memory computing capabilities and the easy integration it offers with a number of big data storage systems and programming APIs. However, a lot of work was needed to realize this solution, both on the practicalities of installing and running Apache Spark on Power systems, something that had not been done at the time, and on building the big data framework for genomics applications.

The first breakthrough came a couple of months after my sabbatical, when Tom Hubregtsen (then a TUDelft student working on his MSc thesis within ARL) was able to set up and run an Apache Spark implementation on a POWER8 system, by modifying and rewriting a whole host of libraries and middleware components in the software stack. Tom worked hard to achieve this important feat as a stepping-stone to his actual work on integrating Flash-based storage into the Spark software stack. He focused on CAPI-connected Flash, and modified Apache Spark to spill intermediate data directly to the Flash system. The results were very promising, showing up to a 70% reduction in overhead as a result of the direct data spilling.

Building on Tom's work, Hamid Mushtaq (a researcher at TUDelft) successfully ran Spark on a five-node IBM Power cluster owned by TUDelft. Hamid then created a Spark-based big data framework that segments the large data volumes used in the analysis and transparently distributes the analysis across a scalable cluster. He also used Spark's in-memory computation capabilities to enable dynamic load balancing across the cluster, depending on the processing requirements of the input files. This enables efficient utilization of the available computational resources in the cluster. Results show that we can reduce the compute time of well-known pipelines by more than an order of magnitude, cutting execution time from hours to minutes. This implementation is now being ported by Frank Liu at ARL to a ten-node POWER8 cluster to check for further scalability and optimization potential.
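The segmentation and dynamic load-balancing idea described above can be sketched in plain Python (no Spark dependency). All region sizes and the worker count below are made-up illustration values, not numbers from the TUDelft implementation; the greedy longest-first assignment is a standard way to keep per-worker run times even.

```python
# Greedy longest-processing-time scheduling: a minimal sketch of
# balancing genome regions across cluster workers. Illustrative only.
import heapq

def balance(region_sizes, n_workers):
    """Assign genome regions (by size) to workers, largest region first,
    always giving the next region to the least-loaded worker."""
    heap = [(0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for size in sorted(region_sizes, reverse=True):
        load, w = heapq.heappop(heap)
        assignment[w].append(size)
        heapq.heappush(heap, (load + size, w))
    return assignment

# Chromosome-like region sizes (in megabases), purely illustrative:
sizes = [248, 242, 198, 190, 182, 171, 159, 145, 138, 133]
plan = balance(sizes, 3)
loads = {w: sum(regions) for w, regions in plan.items()}
```

The same greedy policy generalizes to weighting regions by expected processing cost rather than raw size, which is closer to the input-dependent balancing described above.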

\[caption id="attachment\_2184" align="aligncenter" width="625"\][![Left to right: Hamid Mushtaq, Sofia Danko and Daniel Molnar](images/Delft-2-1024x683.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/Delft-2.jpg) Left to right: Hamid Mushtaq, Sofia Danko and Daniel Molnar\[/caption\]

## FPGA Acceleration

Keeping in mind the high computational requirements of the various genomics algorithms used, as well as the parallelism available in these algorithms, we identified early on the benefits of FPGA acceleration approaches to improve performance even further. However, it is rather challenging to use hardware acceleration in combination with Spark, something that had not yet been shown to work on any system, mainly because of the difficulty of integrating FPGAs into the Java-based Spark software stack. Daniel Molnar (an internship student at TUDelft) took up this challenge and, within a short amount of time, wrote native functions that connect Spark through the Java Native Interface (JNI) to FPGA hardware accelerators for specific kernels. These kernels are now being integrated and evaluated for their system requirements and the speedup they can achieve.

## Improving Genomics Data Compression

Further improvements to the scalable Spark genomics pipeline are being investigated by Sofia Danko (a TUDelft PhD student), who is studying the accuracy of the analysis on Power and proposing approaches to ensure high-quality output that can be used in a clinical environment. She is also investigating state-of-the-art genomics data compression techniques to facilitate low-cost storage and transport of DNA information. Initial results of her analysis show that specialized compression techniques can reduce genomics input files to a fraction of their original size, achieving compression ratios as low as 16%.
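To make the 16% figure concrete, a quick worked example (the 100 GB input size below is a hypothetical illustration, not a measurement from the project):

```python
# Compression ratio = compressed size / original size.
# A ratio of 16% means the file shrinks to 16% of its original size.
def compression_ratio(original_bytes, compressed_bytes):
    return compressed_bytes / original_bytes

original_gb = 100            # hypothetical FASTQ input size
ratio = 0.16                 # ratio reported in the text
compressed_gb = original_gb * ratio   # 16 GB stored instead of 100 GB
savings = 1 - ratio                   # 84% of the storage saved
```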

We are excited to be part of the Innov8 university challenge. Innov8 helps the students work as a team with shared objectives, and motivates them to achieve ambitious goals with relevant societal impact they can be proud of. The team is still working to improve the results of the project, increasing both the performance and the quality of the output. We are also looking forward to presenting our project at the IBM InterConnect 2016 conference, and to competing with the other world-class universities participating in the Innov8 university challenge.

* * *

[![zaid](images/zaid-150x150.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/12/zaid.jpg)_Zaid Al-Ars is cofounder of Bluebee, where he leads the development of the Bluebee genomics solutions. Zaid is also an assistant professor at the Computer Engineering Lab of Delft University of Technology, where he leads the research and education activities of the multi/many-core research theme of the lab. Zaid is involved in groundbreaking genomics research projects such as the optimized child cancer diagnostics pipeline with University Medical Center Utrecht and de novo DNA assembly research projects of novel organisms with Leiden University._

@ -1,24 +0,0 @@
---
title: "GNU Compiler Collection (GCC) for Linux on Power"
date: "2018-07-12"
categories:
- "blogs"
tags:
- "featured"
---

_[This article was originally published by IBM](https://developer.ibm.com/linuxonpower/2018/06/28/gnu-compiler-collection-gcc-linux-power/)_.

By [Bill Schmidt](https://developer.ibm.com/linuxonpower/author/wschmidt-2/)

The GNU Compiler Collection (GCC) is the standard set of compilers shipped with all Enterprise Linux distributions. IBM's Linux on Power Toolchain team supports GCC for Linux on Power, providing enablement and exploitation of new features for each processor generation, and improved code generation for better performance. GCC includes a C compiler (gcc), a C++ compiler (g++), a Fortran compiler (gfortran), a Go compiler (gccgo), and several others.

Because Linux distributors build all of their packages with the same GCC compilers that they ship, for stability reasons GCC is not updated to new versions over time on enterprise distributions. Thus it is often the case that the default GCC on a system is too old to support all features for the most modern processors. It is highly recommended that you use as recent a version of GCC as possible for compiling production quality code.

One way to obtain the most recent compilers (and libraries) is to install the [IBM Advance Toolchain](https://developer.ibm.com/linuxonpower/advance-toolchain/). A new version of the Advance Toolchain is released each August, based upon the most recent GCC compilers and core system libraries available. The Advance Toolchain is free to download, and is fully supported through IBMs Support Line for Linux Offerings. IBM often includes additional optimizations in the Advance Toolchain that were not completed in time for the base release.

If you are a do-it-yourselfer, you can also download the source for the most recent official GCC releases from the Free Software Foundation's website. A list of releases, with links to the mirror sites from which the code can be downloaded, can be found at [https://gcc.gnu.org/releases.html](https://gcc.gnu.org/releases.html). Instructions for installing the software can be found at [https://gcc.gnu.org/install/](https://gcc.gnu.org/install/). A sample configuration command for compilers that will generate POWER8 code is available from the [GCC for Linux on Power user community](https://developer.ibm.com/linuxonpower/compilers-linux-power/gnu-compiler-collection-gcc/).
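As a rough illustration only (the version, install prefix, and exact flag set are assumptions; consult the linked GCC install instructions and user community page for the authoritative command), a from-source build targeting POWER8 typically looks something like:

```shell
# Illustrative build sketch -- adapt paths and versions to your system.
# --with-cpu/--with-tune make POWER8 the default code-generation target.
tar xf gcc-8.2.0.tar.xz
mkdir gcc-build && cd gcc-build
../gcc-8.2.0/configure \
  --prefix=/opt/gcc-8.2 \
  --enable-languages=c,c++,fortran \
  --with-cpu=power8 \
  --with-tune=power8 \
  --disable-multilib
make -j"$(nproc)" && make install
```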

Advice for compiler options for the best performance may be found here: [https://developer.ibm.com/linuxonpower/compiler-options-table/](https://developer.ibm.com/linuxonpower/compiler-options-table/)

Welcome to the [GCC for Linux on Power user community](https://developer.ibm.com/linuxonpower/compilers-linux-power/gnu-compiler-collection-gcc/)!

@ -1,16 +0,0 @@
---
title: "Google, Rackspace, and GPUs: OH MY! See what you missed at OpenPOWER Summit"
date: "2016-04-11"
categories:
- "blogs"
tags:
- "featured"
---

What has over 50 new hardware reveals, collaboration from over 200 members like Google, Rackspace, IBM, and NVIDIA, and made headlines around the world? That's right: OpenPOWER Summit 2016!

Check out our Slideshare below to see some of the great content, quotes from industry leaders, and announcements that you missed.

<iframe style="border: 1px solid #CCC; border-width: 1px; margin-bottom: 5px; max-width: 100%;" src="//www.slideshare.net/slideshow/embed_code/key/naxqMLbjIdddrm" width="760" height="570" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe>

**[OpenPOWER Summit Day 2 Recap](//www.slideshare.net/OpenPOWERorg/openpower-summit-day-2-recap "OpenPOWER Summit Day 2 Recap")** from **[OpenPOWERorg](//www.slideshare.net/OpenPOWERorg)**

@ -1,8 +0,0 @@
---
title: "Google Shows Off Hardware Design Using IBM Chips"
date: "2014-04-28"
categories:
- "blogs"
---

It's no secret that IBM wants to move its technology into the kind of data centers that Google and other Web giants operate. Now comes evidence that Google is putting some serious work into that possibility.

@ -1,61 +0,0 @@
---
title: "High Performance Secondary Analysis of Sequencing Data"
date: "2018-11-13"
categories:
- "blogs"
tags:
- "featured"
---

Genomic analysis is on the cusp of revolutionizing the understanding of diseases and the methods for their treatment and prevention. With the advancements in Next Generation Sequencing (NGS) technologies, the number of human genomes sequenced is predicted to double every year. This market growth is further fueled by the ongoing transition of NGS into the clinical market, where it enables personalized medicine, which promises to transform the diagnosis and treatment of diseases, leading to a disruptive change in modern medicine.

However, current DNA analysis is restricted to using limited data due to the large time and cost of Whole Genome Sequencing (WGS). As biochemical sequencing gets faster and cheaper, the bottleneck becomes the analysis of the large volumes of data generated by these technologies. Faster and cheaper computational processing is required to make genomic analysis available to the masses. Furthermore, pharmaceutical companies, consumer genomics companies, and research centers are currently processing hundreds of thousands of genomes at great cost and will also benefit hugely from this improvement.

Parabricks brings high-performance computing technologies tailored for NGS analyses, accelerating standard NGS software runs from several days to approximately one hour. The accelerated software is a drop-in replacement for existing tools that sacrifices neither output accuracy nor configurability. On POWER9 servers, Parabricks provides 30-36 times faster secondary analysis, from the FASTQ files coming out of the sequencer to the variant call files (VCFs) used for tertiary analysis. The standard pipeline, shown below, consists of three steps and is defined by the Genome Analysis Toolkit (GATK). Parabricks accelerates the existing GATK 4 best practices and generates results equivalent to the baseline. The image below shows the pipeline currently supported by Parabricks.

\[caption id="attachment\_5912" align="aligncenter" width="757"\][![](images/Parabricks.png)](http://opf.tjn.chef2.causewaynow.com/wp-content/uploads/2018/11/Parabricks.png) Figure 1 - Parabricks GPU accelerated pipeline\[/caption\]

## **Power Hardware Configuration**

The Power System AC922 server is co-designed with OpenPOWER Foundation ecosystem members for the demanding needs of deep learning and AI, high-performance analytics, and high-performance computing users. It is deployed in the most powerful supercomputers on the planet through a partnership between IBM, NVIDIA, and Mellanox, among others.

The IBM AC922 server is an accelerator-optimized server supporting four NVIDIA Tesla V100 GPUs, each connected to the POWER9 CPUs via NVLink 2.0 at 150 GB/s. The hardware and system software configurations are summarized below.

| Component | Configuration |
| --- | --- |
| Server | IBM AC922 (8335-GTH) |
| Processor | 40-core IBM POWER9 at 2.4 GHz (3.0 GHz turbo), NVLink 2.0 technology, 4x SMT |
| Memory | 512 GB DDR4 (8 channels), supporting up to 2 TB of memory |
| GPU | 4x NVIDIA V100 16 GB HBM2, SXM2 |

_Table 1 - Hardware configuration_

## **Performance Evaluation**

Secondary analysis of genomic data on CPUs has been known to take a long time. Running the pipeline shown above on 30x WGS data, with HaplotypeCaller for variant calling, can take up to 30-40 hours. Below are the raw run times, in minutes, for the Parabricks software on a POWER9 server for three DNA samples with different coverages, including NA12878.

| Benchmark | Coverage | CPU only (minutes) | BWA-Mem | Others\* | HaplotypeCaller | Total Time (minutes) | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- |
| S2 | 25x | 2,746 | 56.8 | 14.65 | 13.2 | 84.5 | 32.4 |
| NA12878 | 43x | 3,125 | 62.7 | 14.1 | 11.5 | 88.3 | 35.39 |
| NIST 12878 | 41x | 2,993 | 61.05 | 14.95 | 13.71 | 89.71 | 33.96 |

_Table 2 - Run times. "Others" includes coordinate sorting, marking duplicates, BQSR, and ApplyBQSR._
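The speedup column in Table 2 can be sanity-checked directly from the published run times (this is simple arithmetic on the figures above, not a new measurement):

```python
# Speedup = CPU-only run time / accelerated total run time,
# using the NA12878 row from Table 2.
cpu_only_minutes = 3125
accelerated_minutes = 62.7 + 14.1 + 11.5  # BWA-Mem + others + HaplotypeCaller
speedup = cpu_only_minutes / accelerated_minutes  # ~35.39, matching the table
```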

## **Accuracy Evaluation**

The accuracy of the Parabricks solution relative to the GATK4 solution is evaluated at two steps:

i) the BAM file after marking duplicates

ii) the VCF file after calling variants

Parabricks generates a BAM file that is 100% equivalent to the CPU-only solution's, with over 99.99% concordance with the CPU-generated VCF.

| Benchmark | Coverage | BAM | VCF |
| --- | --- | --- | --- |
| S2 | 25x | 100% | 99.998% |
| NA12878 | 43x | 100% | 99.996% |
| NIST 12878 | 41x | 100% | 99.996% |

_Table 3_

## **Features of Parabricks software**

- **30-35 Times Faster Analysis:** Compared to a CPU-only solution, Parabricks accelerates secondary analysis by more than an order of magnitude.
- **100% Deterministic and Reproducible**: Regardless of platform and the number or type of resources, the Parabricks software generates exactly the same results on every execution.
- **Equivalent Results**: The Parabricks pipeline generates results equivalent to the reference Broad Institute GATK 4 best practices pipeline, since the same algorithms are used.
- **Up-to-Date Support of All Tool Versions**: Parabricks accelerated software supports multiple versions of BWA-Mem, Picard, and GATK, and will support future versions of these tools.
- **Visualization**: Parabricks generates several key visualizations in real time while performing secondary analysis, which can improve the user's understanding of the data.
- **Single-Node Execution**: The entire pipeline runs on one compute node and incurs no overhead from distributing data and work across multiple servers.
- **Turnkey Solution**: Parabricks software runs on standard CPU and GPU nodes available in the cloud or on premises, and requires no additional setup steps by the user.
- **On-Premise and Cloud:** Parabricks software can run on local servers, AWS, Google Cloud, and Azure.

Please contact [info@parabricks.com](mailto:info@parabricks.com) for further inquiries.

@ -1,54 +0,0 @@
---
title: "How My Daughter Trained an Artificial Intelligence Model"
date: "2019-12-11"
categories:
- "blogs"
tags:
- "ibm"
- "nvidia"
- "artificial-intelligence"
- "ai"
- "power9"
- "ibm-power-systems"
- "powerai"
- "david-spurway"
- "oxford-cancer-biomarkers"
---

_\*This article was originally published by David Spurway on LinkedIn.\*_

David Spurway, IBM Power Systems CTO, UK & Ireland, IBM

**OpenPOWER Foundation and PowerAI make AI accessible to all**

AI is the most buzz-worthy technology today, with [applications ranging](https://www.techworld.com/picture-gallery/tech-innovation/weirdest-uses-of-ai-strange-uses-of-ai-3677707/) from creating TV news anchors to creating new perfumes. At IBM, we have been focused on this topic for a long time. In 1959, we [demonstrated a computer winning at checkers](https://www.ibm.com/ibm/history/ibm100/us/en/icons/ibm700series/impacts/), which was a milestone in AI. The company then built [Deep Blue](https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/) in 1997, a machine that beat the world chess champion. More recently, IBM released [Watson](https://www.ibm.com/watson) - you may have heard of it playing [Jeopardy](https://www.youtube.com/watch?v=P18EdAKuC1U) or [powering The Weather App](https://www.ibm.com/watson-advertising/news/introducing-the-new-weather-channel-app). IBM continues to push the boundaries of AI with [Project Debater](https://www.research.ibm.com/artificial-intelligence/project-debater/), which is the first AI system that can debate with humans on complex topics.

In fact, after seeing the Watson Grand Challenge in 2011, Google expressed interest in using POWER for their own projects, and [the OpenPOWER Foundation was born](https://www-03.ibm.com/press/us/en/pressrelease/41684.wss). [The Foundation](https://openpowerfoundation.org/) is built around principles of partnership and collaboration, and enables individuals and companies alike to leverage POWER technology for their applications.

One of our key goals at IBM is to lower the barrier to entry for deploying AI. And as the CTO of IBM Power Systems for the UK and Ireland, I've witnessed the impact that POWER can have on ecosystems. A few years ago, I decided to try to deploy an AI application on POWER myself. I took inspiration from an OpenPOWER Foundation blog post, [Deep Learning Goes to the Dogs](https://openpowerfoundation.org/deep-learning-goes-to-the-dogs/), and decided to recreate their model to classify different dog breeds on my own IBM Power Systems server.

I began by using the Stanford Dogs data set, which contains images of 120 breeds of dogs from around the world, and IBM Watson Machine Learning Community Edition (IBM WML CE, formerly known as PowerAI). IBM WML CE was created to simplify the aggregation of hundreds of open source packages necessary to build a single deep learning framework. I used it to make my dog classification work.

The only problem was that it didn't work in **all cases**. While my model was good enough to identify dogs in photos that I took of my children at Crufts, it kept tripping up on classifying dachshunds, a favourite of my daughter's:

![The model didn't know how to classify dachshunds, before Elizabeth fixed it!](https://media.licdn.com/dms/image/C5612AQGzuDG7BBC4bA/article-inline_image-shrink_1000_1488/0?e=1580947200&v=beta&t=V6S5ToENlBXAG9ptru4masv27EHdOIKPslIFd_x3HXU)

The problem here is that the dachshund was not included in the original 120-breed data set. My model didn't know what a dachshund was. In order for it to recognise a dachshund, I needed to upload and label dozens of photos of dachshunds, usually in a specific format, which is a lot of work.
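In practice, labelling work like this usually means arranging images in a class-per-folder layout that training tools can ingest. A minimal sketch of that idea (the breed names and file counts are illustrative, and this is a common convention rather than the exact format PowerAI Vision expects):

```python
import os
import tempfile

def collect_labeled_images(root):
    """Walk a class-per-folder dataset and return (path, label) pairs."""
    samples = []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        if not os.path.isdir(class_dir):
            continue
        for name in sorted(os.listdir(class_dir)):
            samples.append((os.path.join(class_dir, name), label))
    return samples

# Build a toy dataset: one folder per breed, a few image files inside each.
root = tempfile.mkdtemp()
for breed in ("dachshund", "beagle"):
    os.makedirs(os.path.join(root, breed))
    for i in range(3):
        open(os.path.join(root, breed, f"{i}.jpg"), "w").close()

samples = collect_labeled_images(root)
```

Every new class (like the missing dachshund) means collecting, sorting and naming another folder of images, which is why a subject matter expert with labelling tools beats doing it by hand.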

Enter my daughter Elizabeth.

Elizabeth is a big fan of dogs, and was happy to lend her expertise for the benefit of my project.

PowerAI Vision makes it easy for someone like my daughter, a subject matter expert, to come in and do this work, instead of requiring that it be done by a data scientist. It's the key to democratising artificial intelligence.

My daughter channelled her passion for and knowledge of dogs and whipped my model into shape in no time.

![After my daughter trained the model to recognize dachshunds, using PowerAI Vision.](https://media.licdn.com/dms/image/C5612AQHncaW380qn5g/article-inline_image-shrink_1000_1488/0?e=1580947200&v=beta&t=j96X3kOiprgqaKhWFPia7IIgQYJ3vunxHa251roE7W8)

“Okay, David,” you might be thinking. “Dogs are a fun topic, but let's get serious. Why is classifying dachshunds so important to you?”

Well, the truth is that through the OpenPOWER Foundation and tools like PowerAI, artificial intelligence models can be built for any number of applications.

In fact, this **exact same** technology is being used in the UK to detect cancers. Predicting which patients with stage II colorectal cancer will suffer a recurrence after surgery is difficult. However, many are routinely prescribed chemotherapy, even though it may cause severe side effects. In some patients these can be fatal. [Oxford Cancer Biomarkers](https://oxfordbio.com/) (OCB) was established in 2012 to discover and develop biomarkers (a quantifiable biological parameter that provides insight into a patient's clinical state) to advance personalized medicine within oncology, focusing on colorectal cancer and its treatments. On a personal note, my father was successfully treated for this cancer. OCB [partnered](https://meridianit.co.uk/ocb-case-study/) with IBM and the IBM Business Partner Meridian to apply PowerAI Vision (using Power Systems AC922 servers, which pair POWER9 CPUs and NVIDIA Tesla V100 with NVLink GPUs) to identify novel diagnostic biomarkers in tumor microenvironments, with the potential to enhance early diagnosis and treatment decisions.

My daughter can use her expertise to help classify dog breeds - and now there's no limit to how you can use your own expertise to make the world a better place.

@ -1,18 +0,0 @@
---
title: "How the IBM-GLOBALFOUNDRIES Agreement Supports OpenPOWER's Efforts"
date: "2014-10-22"
categories:
- "blogs"
---

By Brad McCredie, President of OpenPOWER and IBM Fellow and Vice President of Power Development

On Monday IBM and GLOBALFOUNDRIES announced that they had signed a Definitive Agreement under which GLOBALFOUNDRIES plans to acquire IBM's global commercial semiconductor technology business, including intellectual property, world-class technologists and technologies related to IBM Microelectronics, subject to completion of applicable regulatory reviews. From my perspective as both OpenPOWER Foundation President and IBM's Vice President of Power Development, I'd like to share my thoughts with the extended OpenPOWER community on how this Agreement supports our collective efforts.

This Agreement, once closed, will enhance the growing OpenPOWER ecosystem consisting of both IBM and non-IBM branded POWER-based offerings. While of course our OpenPOWER partners retain an open choice of semiconductor manufacturing partners, IBM's manufacturing base for our products will be built on a much larger capacity fab, which should benefit potential customers.

IBM's sharpened focus on fundamental semiconductor research, advanced design and development will lead to increased innovation that will benefit all OpenPOWER Foundation members. IBM will extend its global semiconductor research and design to advance differentiated systems leadership and innovation for a wide range of products including POWER based OpenPOWER offerings from our members. IBM continues its previously announced $3 billion investment over five years for semiconductor technology research to lead in the next generation of computing.

IBM remains committed to an extension of the open ecosystem using the POWER architecture; this Agreement does not alter IBM's commitment to the OpenPOWER Foundation. This announcement is consistent with the goals of the OpenPOWER Foundation to enable systems developers to create more powerful, scalable and energy-efficient technology for next-generation data centers. The full stack -- beginning at the chip and moving all the way to middleware software -- will drive systems value in the future. IBM and the members of the OpenPOWER Foundation will continue to lead the challenge to extend the promise that Moore's Law could not fulfill, offering end-to-end systems innovation through our robust collaboration model.

Today's Agreement reaffirms IBM's commitment to move towards world-class systems -- both those offered by IBM and those built by our OpenPOWER partners that leverage POWER's open architecture -- that can handle the demands of new workloads and the unprecedented amount of data being generated. I look forward to our continued work together, as IBM extends its semiconductor research and design capabilities for open innovation for cloud, mobile, big data analytics, and secure transaction-optimized systems.

@ -1,36 +0,0 @@
---
title: "How Ubuntu is enabling OpenPOWER and innovation - Randall Ross (Canonical)"
date: "2015-01-16"
categories:
- "blogs"
---

### Objective

Geared towards a business audience that has some understanding of POWER and cloud technology, and would like to learn how their combination can provide advantages for tough business challenges.

### Abstract

Learn how Canonical's Ubuntu is enabling OpenPOWER solutions and cloud-computing velocity. Ubuntu powers the majority of cloud deployments. Offerings such as Ubuntu Server, Metal-as-a-Service (MAAS) hardware provisioning, orchestration (Juju, Charms, and Charm Bundles), workload provisioning, and OpenStack installation technologies simplify managing and deploying OpenPOWER-based solutions in OpenStack, public, private and hybrid clouds. OpenPOWER-based systems are designed for scale-out and scale-up cloud and analytics workloads and are poised to become the go-to solution for the world's (and your business's) toughest problems.

This talk will focus on the key areas of OpenPOWER-based solutions, including:

- Strategic POWER8 workloads
- Solution Stacks that benefit immediately from OpenPOWER
- CAPI (Flash, GPU, FPGA and acceleration in general)
- Service Orchestration
- Ubuntu, the OS that fully supports POWER8
- Large Developer Community and mature development processes
- Ubuntu's and OpenPOWER's low-to-no barrier to entry

### Speaker names / Titles

Randall Ross (Canonical's Ubuntu Community Manager for OpenPOWER & POWER8) and Jeffrey D. Brown (IBM Distinguished Engineer, Chair of the OpenPOWER Foundation Technical Steering Committee) _\- proposed co-presenter, to be confirmed_

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Randall-Ross_OPFS2015_Canonical_031715.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Randall-Ross_OPFS2015_Canonical_031715.pdf)

[Back to Summit Details](javascript:history.back())

@ -1,46 +0,0 @@
---
title: "HPC solution stack on OpenPOWER"
date: "2015-01-19"
categories:
- "blogs"
---

### Introduction to Authors

Bin Xu: Male, IBM STG China, advisory software engineer and PCM architect, mainly focused on High Performance Computing and Software Defined Environments.

Jing Li: Male, IBM STG China, development manager for PCM/PHPC.

### Background

OpenPOWER will be one of the major platforms used across many industry areas, especially High Performance Computing (HPC). IBM Platform Cluster Manager (PCM) is popular cluster management software that aims to simplify system and workload management in the data center.

### Challenges

OpenPOWER is a brand-new platform based on IBM POWER technology, and customers are asking whether their end-to-end applications, and even complete solutions, can run well on it.

Our experience: This demo will show that IBM OpenPOWER can serve as the foundation of a complete, complex High Performance Computing solution. From HPC cluster deployment, job scheduling, system management and application management through to the scientific computing workloads on top of them, all of these components can be constructed on the IBM OpenPOWER platform with good usability and performance. The demo also shows the simplicity of migrating a complete x86-based HPC stack to the OpenPOWER platform. In this demo, Platform Cluster Manager (PCM) and xCAT serve as the deployment and management facilitators of the solution, Platform HPC is the total solution integrated with Platform LSF (Load Sharing Facility), Platform MPI and other HPC-related middleware, and two popular HPC applications are demonstrated on this stack.

[![Abstractimage1](images/Abstractimage1-300x269.jpg)](https://openpowerfoundation.org/wp-content/uploads/2015/01/Abstractimage1.jpg)

The figure above shows three steps:

- The admin installs the head node.
- The admin discovers the other nodes and provisions them to join the HPC cluster automatically.
- Users run their HPC applications and monitor the cluster on the dashboard.

### Benefit

The HPC cluster environment based on OpenPOWER technology is faster and easier to deploy, and provides system management and workload management with great usability for OpenPOWER HPC.

### Next Steps and Recommendations

Integration with other applications in the OpenPOWER environment.

### Presentation

<iframe src="https://openpowerfoundation.org/wp-content/uploads/2015/03/Jing-Li_OPFS2015_IBM_S5700-HPC-Solution-Stack-on-OpenPOWER.pdf" width="100%" height="450" frameborder="0"></iframe>

[Download Presentation](https://openpowerfoundation.org/wp-content/uploads/2015/03/Jing-Li_OPFS2015_IBM_S5700-HPC-Solution-Stack-on-OpenPOWER.pdf)

[Back to Summit Details](javascript:history.back())

@ -1,68 +0,0 @@
---
title: "IBM Announces New Open Source Contributions at OpenPOWER Summit Europe 2019"
date: "2020-01-22"
categories:
- "blogs"
tags:
- "openpower"
- "ibm"
- "openpower-foundation"
- "opencapi"
- "power-isa"
- "oc-accel"
- "capi-flashgt"
- "open-source"
---

By: Mendy Furmanek, Director, OpenPOWER Processor Enablement, IBM and President, OpenPOWER Foundation

2019 was an important year for the OpenPOWER Foundation - especially the second half of the year. In the course of a few months, our ecosystem became even more open and the POWER architecture became more accessible to all.

In August, IBM made a major announcement at OpenPOWER Summit North America by [open-sourcing the POWER ISA](https://openpowerfoundation.org/the-next-step-in-the-openpower-foundation-journey/) as well as numerous key hardware reference designs. With these announcements, POWER became the only architecture with an entirely open stack - from the processor ISA at the foundation through the software stack.

![IBM has a completely open system, from the processor ISA to the software stack.](images/IBM-1.png)

With exploding amounts of data involved in modern workloads, we believe that open source hardware and an innovative ecosystem are key for the industry. So to lead the industry forward in that direction, we've continued to make additional contributions to the open source community.

Then, at OpenPOWER Summit Europe in October, I announced two new contributions: one dealing with CAPI FlashGT and one with OpenCAPI technology.

**CAPI FlashGT - Accelerated NVMe Controller FPGA IP**

![CAPI FlashGT](images/IBM-2.png)CAPI Flash was already available, but our open-sourcing of the FlashGT component makes the entire CAPI Flash stack completely open.

Each time an application makes a system call to the operating system, it adds latency - time and overhead in the kernel stack. FlashGT moves a portion of that process from software into hardware, so much of the kernel instruction path and interface is no longer needed in the software stack. The end result is a faster and more efficient process - lower latency, higher bandwidth.

With fewer instructions running on the CPU core, there can be a dramatic increase in CPU offload. Initial performance testing shows significant improvements:

- 6x 4k random read IOPs per core
- 2.5x 4k random write IOPs per core
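Those per-core gains follow directly from shrinking the per-I/O CPU cost. A back-of-the-envelope sketch of the relationship (the cycle count and clock rate below are illustrative assumptions, not measured FlashGT numbers):

```python
def iops_per_core(cycles_per_io, core_hz=3.5e9):
    """IOPs a single core can drive if each I/O costs cycles_per_io CPU cycles."""
    return core_hz / cycles_per_io

# Assumed cost of a 4k random read through the full software kernel path.
baseline_read = iops_per_core(20_000)

# A 6x read-IOPs-per-core improvement implies roughly 1/6 the CPU cycles per I/O.
offloaded_read = iops_per_core(20_000 / 6)

print(round(offloaded_read / baseline_read))  # -> 6
```

The same core either drives six times the I/O or spends the freed cycles on application work, which is what "CPU offload" means here.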

More information on [CAPI FlashGT can be found here.](https://github.com/open-power/capi2-flashgt)

**OpenCAPI Acceleration Framework (OC-Accel)**

OC-Accel is the Integrated Development Environment (IDE) for creating FPGA-based application accelerators. Put simply, it enables virtual memory sharing between processors and OpenCAPI devices.

![OpenCAPI Acceleration Framework (OC-Accel)](images/IBM-3.png)

Numerous layers of logic are needed to create an OpenCAPI device, including the physical, data link and transport layers. These have been available previously. But our open-sourcing of the OC-Accel bridge makes everything needed for an OpenCAPI device available today.

![OpenCAPI Acceleration Framework (OC-Accel)](images/IBM-4.png)

OC-Accel includes:

- Hardware logic to hide the details of TLX protocol
- Software libraries for application code to communicate with the accelerator
- Scripts and strategies to construct an FPGA project
- Simulation environment
- Workflow for coding, debugging, implementation and deployment
- High level synthesis support
- Examples and documents to get started

More information on [OC-Accel can be found here](https://github.com/OpenCAPI/oc-accel).

Now in 2020, we are still at the beginning of our open source journey. When we look at the world today, we know that the only way for the industry to succeed is through open collaboration - a rising tide lifts all boats, as the saying goes. We're proud to be part of the movement that is enabling the ecosystem to innovate more quickly with our IP and making great strides in computing. Thank you for being a part of the movement with us!

Please view my full session from OpenPOWER Summit Europe 2019 below.

<iframe src="https://www.youtube.com/embed/ufBtrGJVF6g" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

@ -1,10 +0,0 @@
---
title: "IBM hopes its enhanced Power8 chip will take on Intel's x86"
date: "2014-06-27"
categories:
- "press-releases"
- "industry-coverage"
- "blogs"
---

BANGALORE, JUNE 27: IBM will use its huge India software developer base to work on its new Power8 chips to challenge Intel's dominance. The world's largest software company has launched its new Power8 chip architecture, an enhancement over its earlier version, to take on Intel's Xeon chips, or x86, widely used in data centres and server computers worldwide.

@ -1,8 +0,0 @@
---
title: "IBM is changing the server game"
date: "2014-04-30"
categories:
- "blogs"
---

There was something I missed in IBM's strategy when they sold the x86 branch to Lenovo. Since reading some articles about OpenPOWER and Google's first home-made Power8 server, this strategy has been making more sense.

@ -1,16 +0,0 @@
---
title: "IBM Portal for OpenPOWER launched for POWER series documentation, system tools and development collaboration"
date: "2017-03-30"
categories:
- "blogs"
---

_By Andy Pearcy-Blowers, OpenPOWER Applications Engineer and IBM Portal for OpenPOWER Co-Lead & Luis Armenta, Sr. SI Engineer, Project Manager and IBM Portal for OpenPOWER Lead_

This week, OpenPOWER member IBM launched its new website, the "[IBM Portal for OpenPOWER](https://www.ibm.com/systems/power/openpower)". The IBM Portal for OpenPOWER was developed to provide a central location for documentation on Power Systems servers. The IBM Portal for OpenPOWER gives users the ability to quickly find material of interest, including but not limited to: User's Manuals, Datasheets, Reference Design documentation, Firmware Training and more, to foster innovation in developing around POWER.

This new portal replaces IBM Customer Connect's OpenPOWER Connect space that OpenPOWER Members and other OpenPOWER interested parties may have used in the past.

Throughout 2017, additional functionality and applications will be deployed to the IBM Portal for OpenPOWER. Examples of functionality improvements include enhancements to the search function, social tools, documentation repository and subscription tools. Examples of application implementations include a new Collaboration Center, System Tools, Issues Management and more. The Collaboration Center will give OpenPOWER partners, during development with IBM, the ability to securely share files, screen share, track milestones and more. The System Tools application will give entitled OpenPOWER partners the ability to download tools like HTX, Cronus, HSSCDR and more to use while developing and verifying their system designs. The Issues Management application will allow any user to submit questions, issues and requests for support to IBM.

To visit the site and start developing around POWER go to: [www.ibm.com/systems/power/openpower](https://www.ibm.com/systems/power/openpower).
