<para>Snap is also a public GitHub repository. Create a fork (click the "Fork" button) on <link xlink:href="https://github.com/open-power/snap">https://github.com/open-power/snap</link>. Keep working on your own snap fork; when it works, submit a pull request to "open-power/snap" and request a merge into the public upstream.</para>
<para>capi2-bsp is a submodule of snap. You can find it in the ".gitmodules" file (a hidden file). Please point it to your own capi2-bsp fork (an example entry is shown below). Then run:</para>
<screen>git submodule init
git submodule update</screen>
<para>In any case, make sure that "hardware/capi2-bsp" contains what you generated in the last chapter.</para>
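<para>For reference, a ".gitmodules" entry pointing to a personal fork could look like the following (the user name is purely illustrative). If you change the URL after the submodule has already been initialized, run "git submodule sync" before updating.</para>
<screen>[submodule "hardware/capi2-bsp"]
    path = hardware/capi2-bsp
    url = https://github.com/&lt;your-github-id&gt;/capi2-bsp.git</screen>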
</note>
</section>
<section><title>SNAP structure</title>
<para>On the FPGA side, there are three parts that need to be considered when moving to a new FPGA card: (a) the BSP, (b) snap_core, and (c) the DDR memory controller (mig). Some other components in SNAP also need to be updated for a new FPGA card.</para>
<para>All of the user-developed accelerators are in the "<emphasis role="bold">actions</emphasis>" directory. There are already some examples there. Each "action" has its "sw", "hw", "tests", and other sub-directories. The hardware part uses "action_wrapper" as its top.</para>
<para>Back in ${SNAP_ROOT}, the "<emphasis role="bold">software</emphasis>" directory includes libsnap, header files and some tools. The "<emphasis role="bold">hardware</emphasis>" directory is the main focus. "<emphasis role="bold">deconfig</emphasis>" has the config files for silent-testing purposes, and "<emphasis role="bold">scripts</emphasis>" has the menu settings and other scripts.</para>
<para>
How does SNAP work and what are the files used in each step?
</para>
<itemizedlist>
<listitem><para><emphasisrole="bold">make snap_config</emphasis>: The menu to select cards and other options is controlled by "script/Kconfig"</para></listitem>
<listitem>
<para><emphasisrole="bold">make model</emphasis>: This step creates a Vivado project. It firstly calls "hardware/setup/<emphasisrole="bold">create_snap_ip.tcl</emphasis>" to generate the IP files in use, then calls "hardware/setup/<emphasisrole="bold">create_framework.tcl</emphasis>" to build the project. About create_framework.tcl: </para>
<itemizedlist>
<listitem>
<para>It adds the BSP (board support package). In CAPI1.0, it is also called the PSL Checkpoint file (b_route_design.dcp) or base_image. The script takes the path pointing to b_route_design.dcp and adds the checkpoint to the design. In CAPI2.0, it calls the make process in the capi2-bsp submodule to generate "capi_bsp_wrap" if it doesn't exist; if you have already generated it successfully, this step is skipped. Then "create_framework.tcl" adds capi_bsp_wrap (xcix or xci file) to the design.</para>
</listitem>
<listitem>
<para>It adds FPGA top files and snap_core files (in hardware/hdl/core).</para>
</listitem>
<listitem>
<para>It adds constraint files: in hardware/setup/${FPGACARD} or in hardware/capi2-bsp/${FPGACARD}.</para>
</listitem>
<listitem>
<para>It adds the user files (in actions/${ACTION_NAME}/hw). The user's action hardware uses a top file named "action_wrapper.vhd".</para>
</listitem>
<listitem>
<para>It adds simulation files (in hardware/sim/core), including the simulation top files and simulation models. (If "no_sim" is selected in the snap_config menu, this step is skipped.)</para>
</listitem>
</itemizedlist>
<para>After the above steps, "<emphasis role="bold">viv_project</emphasis>" has been created. You can open it with the Vivado GUI and check the design hierarchy. The make process then calls the selected simulator to compile the simulation model.</para>
</listitem>
<listitem>
<para><emphasisrole="bold">make image</emphasis>: This step runs synthesis, implementation and bitstream generation. It calls "hardware/setup/<emphasisrole="bold">snap_build.tcl</emphasis>" and also uses some related tcl scripts to work on "viv_project". In this step, "hardware/build" will be created and the output products like bit images, checkpoints (middle products for debugging) and reports (reports of timing, clock, IO, utilization, etc.) If everything runs well and timing passes, user will get the bitstream files (in "<emphasisrole="bold">build</emphasis>/Images" sub directory) to program the FPGA card. </para>
</listitem>
</itemizedlist>
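<para>To summarize the flow, a typical sequence of commands (assuming the snap environment has already been set up) looks like this:</para>
<screen>cd ${SNAP_ROOT}
make snap_config    # select the card and other options in the menu
make model          # create viv_project and compile the simulation model
make image          # synthesis, implementation and bitstream generation</screen>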
</section>
<section><title>Modifications to snap git repositories</title>
<para>For a new FPGA card, the detailed items to update are listed below.</para>
<itemizedlist>
<listitem><para>Software and tools</para></listitem>
<listitem><para>Testing</para></listitem>
<listitem><para>Publishing</para></listitem>
</itemizedlist>
<para>The best way is to grep for keywords like "S241" or "AD8K5" under these directories and look for the locations that need modifications.</para>
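<para>For example, a search like the following (using an existing card name as the keyword) shows most of the places that reference a supported card:</para>
<screen>cd ${SNAP_ROOT}
grep -rn "S241" hardware software scripts</screen>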
<note>
<para>If you encounter files ending with "_source", like "psl_fpga.vhd_source", it means the file is pre-processed to generate the output file without the "_source" suffix, like "psl_fpga.vhd". These files contain <userinput>#ifdef</userinput> macros or comments like <userinput>-- only for NVME_USED=TRUE</userinput>, which help create a target VHDL/Verilog file for different configurations.</para>
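<para>As a purely illustrative example (the signal name is made up), a fragment of such a "_source" file could look like this:</para>
<screen>#ifdef NVME_USED
  -- only for NVME_USED=TRUE
  signal nvme_irq : std_logic;
#endif</screen>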
</note>
<para>The files to change are listed below. There may be some differences with newer commits in the SNAP git repository. Keep in mind they include: </para>
<itemizedlist>
<listitem><para>snap_config and environmental files</para></listitem>
<listitem><para>Hardware: psl_accel and psl_fpga (top) RTL files</para></listitem>
<listitem><para>Hardware: tcl files for the workflow</para></listitem>
<listitem><para>Hardware: Board: xdc files for IO/floorplan/clock/bitstream</para></listitem>
<listitem><para>Hardware: DDR: create DDR Memory controller IP (mig) in create_snap_ip.tcl, create DDR memory sim model, and other xdc files</para></listitem>
<listitem><para>Hardware: Other IP: create_ip, sim model, xdc files</para></listitem>
<listitem><para>Software: New card type, register definition</para></listitem>
</itemizedlist>
<table><title>Files to change in the snap repository</title>
<tgroup cols="2">
<thead>
<row><entry><para>File</para></entry><entry><para>What to change</para></entry></row>
</thead>
<tbody>
<row><entry><para>hardware/sim/ddr4_dimm_???.sv</para></entry><entry><para>DDR memory model for simulation. Find out how many DDR chips are connected together, the density and data width of each chip, and whether one chip is used for ECC (redundancy). You can take an existing file as a template and modify it.</para></entry></row>
<row><entry><para>hardware/sim/top_capi?0.sv_source</para></entry><entry><para>Instantiates the DDR memory model.</para></entry></row>
<row><entry><para>hardware/snap_check_psl (only for CAPI1.0)</para></entry><entry><para>Declares the card name.</para></entry></row>
<row><entry><para>README.md</para></entry><entry><para>Announces that a new card is supported.</para></entry></row>
</tbody>
</tgroup>
</table>
</section>
<section><title>Update capi-utils</title>
<para>capi-utils is the third git repository that needs a few modifications. Same as before, fork it, make the modifications and submit a pull request.</para>
<para>The first column is the SUBSYSTEM_ID, the second column is the card name, the third is the FPGA chip vendor, and then comes the User Image starting address on the flash. For an SPI device, the block size is 64 bytes. "SPIx4" is the flash interface type; it may also be "BPIx16" or "SPIx8". </para>
<para>"SPIx8" uses two bitstreams so another starting address also needs to be provided. And when you call "capi-flash-script" to program the flash, it needs two input bitstream files (primary and secondary).</para>
</section>
<section><title>Strategy to enable a new card</title>
<para>
To enable a new card on SNAP, complete the following tasks one by one.
</para>
<orderedlist>
<listitem><para>Generate capi_bsp_wrap in capi2-bsp.</para></listitem>
<listitem><para>Make modifications to snap git repository as described above.</para></listitem>
<listitem><para>Select an action example without DDR, for example: hls_helloworld. </para></listitem>
<listitem><para>Go through the "make model" and "make image" processes and build the bitstream files. </para></listitem>
<listitem><para>Plug the card into a Power9 server and connect a JTAG/USB cable to a laptop. Install Vivado Lab on this laptop (it requires a Windows or Linux operating system). Start the Vivado Lab tool and open the Hardware Manager.</para></listitem>
<listitem><para>Power on the server. You will see the FPGA target being recognized by the Vivado Lab tool.</para></listitem>
<listitem><para>Program the generated bitstream files (bin or mcs) to the card. In the Vivado Lab tool, select the FPGA chip, right-click, choose "Add Configuration Memory Device..." and program the bin/mcs files to the flash. See <xref linkend="vivado-lab"/> and <xref linkend="jtag"/>.</para></listitem>
<listitem><para>Wait for it to complete (it may take 10 minutes). Unplug the JTAG/USB cable and reboot the server.</para></listitem>
<listitem><para>After the server has booted, log into the OS and run <userinput>lspci</userinput> to see whether the card is there (usually with device ID 0x0477); see the example after this list. Then download snap, capi-utils and libcxl (from github). Go to the snap directory, run <userinput>make apps</userinput> and run the application.</para></listitem>
</orderedlist>
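<para>A quick way to check for the card from the OS (IBM PCI vendor ID 0x1014 together with the CAPI device ID 0x0477) is, for example:</para>
<screen>lspci -d 1014:0477</screen>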
<note><para>There is another way to replace steps 6 to 8. We call it <emphasis role="bold">"Fast program bit-file when power on"</emphasis>. Prepare the <emphasis role="bold">bit</emphasis> file on the laptop in advance. Unlike the bin/mcs files, which are written to the flash, the bit file is used to program the FPGA chip directly. When the server is powered on, as soon as Vivado Lab sees the FPGA, right-click the device, select <userinput>Program device ...</userinput> and choose the bit file <emphasis role="bold">immediately</emphasis>. This action only takes about 10 seconds and can be done before hostboot on the server starts to scan PCIe devices.</para>
<para>Be aware that because only the FPGA chip is programmed (the flash memory is empty), the FPGA loses power when the server is powered off or rebooted, so its programming is lost.</para>
<para>When you download and install <emphasis role="bold">Vivado Lab</emphasis>, please pick the same version as the Vivado (SDx) that you are using to build the images.</para>
</note>
<para><emphasisrole="bold">Tips to help you debug:</emphasis></para>
<orderedlist>
<listitem><para>Seeing 0477 with "lspci" is the most important milestone. If you do not, please check the file "<emphasis role="bold">/sys/firmware/opal/msglog</emphasis>" to see whether there are link-training failure messages. A successful link-training message means this PCIe device has been scanned and recognized; the number following "PHB#" is the PCIe device identifier in the format "domain:bus:slot.func", which you can also see with "lspci".</para></listitem>
<listitem><para>Check <emphasis role="bold">dmesg</emphasis>. Run "dmesg > dmesg.log" and search for "cxl" in the dmesg.log file. A normal output looks like this:</para>
<screen>[ 9.301403] cxl-pci 0000:01:00.0: Device uses a PSL9</screen>
<para>If your PCIe device has been recognized as a CAPI device, run "ls /dev/cxl" and you will see "afu*" devices. Your application software can then open the device like an ordinary file.</para>
<para>Please change the PCIe device identifier (0000:00:00.1) accordingly. Make sure you have seen that the VSEC is properly linked. If not, go back and check your VSEC address in capi_vsec.vhdl from the last chapter.</para></listitem>
<listitem><para>Use <link xlink:href="https://github.com/ibm-capi/capi-utils">capi-utils</link> to program the bitstream files. If it succeeds, it proves that the flash interface has been configured correctly. After this step, you can get rid of the JTAG connector and use "capi-flash-script" to program the FPGA bitstreams.</para>
<para>The mechanism behind "capi-flash-script" is as follows:</para>
<para>There is a flash controller on the FPGA (in capi_bsp_wrap), and it is connected to the PCIe config space. The flash controller exposes four VSEC registers that allow the host system to control it: the "Flash Address Register", "Flash Size Register", "Flash Status/Control Register" and "Flash Data Port". See the <link xlink:href="http://openpowerfoundation.org/wp-content/uploads/resources/hwarch-caia-spec/content/ch_preface.html">Coherent Accelerator Interface Architecture</link>, Chapter 12.3, "CAIA Vendor-Specific Extended Capability Structure". The capi-utils C source reads the FPGA bitstream "bin" file and writes its bytes to the VSEC "Flash Data Port" register; the bytes travel over PCIe, through the flash controller, and finally arrive in the flash memory on the card.</para></listitem>
<listitem><para>Select another action example (hdl_example with DDR) or hls_memcopy.</para></listitem>
<listitem><para>"make model" and "make sim". Make sure the DDR simulation model works well.</para></listitem>
<listitem><para>"make image" to generate the bitstream files.</para></listitem>
<listitem><para>Use capi-utils to program the bitstream "bin" file to the card.</para></listitem>
<listitem><para>Run the application to see whether it works.</para></listitem>
</orderedlist>
<para>Basically, SNAP has implemented only one DDR bank (or channel), while most cards have two to four banks (the N250S+ is one of the rare cards with only one DDR bank). The main reason is that, depending on the user's needs, there are two options: the first is to simply extend the size of the first bank by attaching the second bank to the same DDR memory controller; the other is to use two (or more) memory controllers in parallel for higher throughput. The latter option means duplicating the DDR memory controller, which takes twice the area in the design. In that case, the action_wrapper also requires changes to add the additional DDR ports. For an HLS design, another HLS DDR port should be added in "actions/[YOUR_ACTION]/hw/XXX.CPP". As this is an open-source project, everyone is welcome to contribute by implementing this and adding it to the SNAP design.</para>
</section>
<section><title>Stage 4: Verify Other IO interface</title>
<para>This stage depends on the card's capabilities and the specific IOs the card provides. As with the second (or additional) DDR channels, users are free to add their own code for other IO interfaces. </para>
<para>You can check the results of "snap/actions/hls_memcopy/tests/test_*_throughput.sh" for bandwidth and "snap/actions/hls_latency_eval/test/test*.sh" for latency. </para>
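<para>For example (the exact script names may differ between SNAP versions):</para>
<screen>cd ${SNAP_ROOT}/actions/hls_memcopy/tests
./test_*_throughput.sh
cd ${SNAP_ROOT}/actions/hls_latency_eval/test
./test_*.sh</screen>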
</section>
<section><title>Stage 6: Pressure Test</title>
<para>Prepare bitstream files for basic tests, throughput tests, latency tests and max-power tests. Add image flashing tests, card reset tests and others. Run them intensively.</para>
</section>
</section>
<section><title>Cleanup and submit</title>
<para>Now a new FPGA card has been enabled for CAPI2.0 SNAP. Clean up your workspace, check the files and submit your work!</para>
<orderedlist>
<listitem><para>capi-utils is independent. Just create a pull request and assign a reviewer. It can only be merged into the master branch after it has been reviewed.</para></listitem>
<listitem><para>Submit the pull request for your capi2-bsp fork before the snap fork. Assign a reviewer and wait for capi2-bsp to be merged into the https://github.com/open-power/capi2-bsp master branch.</para></listitem>
<listitem><para>Update the submodule pointer to the latest "open-power/capi2-bsp" master and then submit the pull request for your forked snap.</para></listitem>