diff --git a/enable_capi_snap/ch_capi20_snap.xml b/enable_capi_snap/ch_capi20_snap.xml
new file mode 100644
index 0000000..c73da36
--- /dev/null
+++ b/enable_capi_snap/ch_capi20_snap.xml
@@ -0,0 +1,368 @@
+
+
+
+
+Enable CAPI2.0 SNAP
+
+Work on GitHub
+ SNAP is also hosted as a public GitHub repository. Create a fork (click the "Fork" button) on https://github.com/open-power/snap. Keep working on your own snap fork; when it works, submit a pull request to "open-power/snap" and request that it be merged into the public upstream.
+ git clone https://github.com/[YOUR_USERNAME]/snap
+
+ capi2-bsp is a submodule of snap. You can find it in the ".gitmodules" file (a hidden file). Point it to your own capi2-bsp fork, then run:
+ git submodule init
+git submodule update
+ In any case, make sure that "hardware/capi2-bsp" contains what you generated in the last chapter.
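+ As a minimal sketch, the relevant ".gitmodules" entry would look like the following (assuming the submodule path is "hardware/capi2-bsp" as above, and with [YOUR_USERNAME] as a placeholder for your GitHub account):
+ [submodule "hardware/capi2-bsp"]
+     path = hardware/capi2-bsp
+     url = https://github.com/[YOUR_USERNAME]/capi2-bsp
+ After editing ".gitmodules", run "git submodule sync" before "git submodule update" so that git picks up the new URL.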
+
+
+
+SNAP structure
+ On the FPGA side, there are three parts to consider when moving to a new FPGA card: (a) the BSP, (b) snap_core, and (c) the DDR memory controller (mig). Some other components in SNAP also need to be updated for a new FPGA card.
+
+ SNAP also includes a software part. The following picture shows the folders and files of the SNAP GitHub repository:
+
+ All of the user-developed accelerators are in the "actions" directory, where some examples are already provided. Each "action" has its own "sw", "hw", "tests", and other sub-directories. The hardware part uses "action_wrapper" as its top.
+ Back in ${SNAP_ROOT}, the "software" directory includes libsnap, header files and some tools. The "hardware" directory is the main focus. "defconfig" has the config files for silent testing, and "scripts" has the menu settings and other scripts.
+
+ How does SNAP work and what are the files used in each step?
+
+
+ make snap_config: The menu to select cards and other options is controlled by "scripts/Kconfig".
+
+ make model: This step creates a Vivado project. It first calls "hardware/setup/create_snap_ip.tcl" to generate the IP files in use, then calls "hardware/setup/create_framework.tcl" to build the project. About create_framework.tcl:
+
+
+ It adds the BSP (board support package). In CAPI1.0, it is also called the PSL Checkpoint file (b_route_design.dcp) or base_image; the path pointing to b_route_design.dcp is used to add it into the design. In CAPI2.0, it calls the make process in the capi2-bsp submodule to generate "capi_bsp_wrap" if it doesn't exist; if you have already generated it successfully, this step is skipped. Then "create_framework.tcl" adds capi_bsp_wrap (xcix or xci file) into the design.
+
+
+ It adds FPGA top files and snap_core files (in hardware/hdl/core).
+
+
+ It adds constraint files: in hardware/setup/${FPGACARD} or in hardware/capi2-bsp/${FPGACARD}.
+
+
+ It adds user files (in actions/${ACTION_NAME}/hw). The user's action hardware uses a top file named "action_wrapper.vhd".
+
+
+ It adds simulation files (in hardware/sim/core), including simulation top files and simulation models. (If "no_sim" is selected in the snap_config menu, this step is skipped.)
+
+
+ After the above steps, "viv_project" is created. You can open it with the Vivado GUI and check the design hierarchy. This step also calls the selected simulator to compile the simulation model.
+
+
+ make image: This step runs synthesis, implementation and bitstream generation. It calls "hardware/setup/snap_build.tcl" and also uses some related tcl scripts to work on "viv_project". In this step, "hardware/build" is created and collects the output products, such as bit images, checkpoints (intermediate products for debugging) and reports (timing, clock, IO, utilization, etc.). If everything runs well and timing passes, the user gets the bitstream files (in the "build/Images" sub-directory) to program the FPGA card.
+
+
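+ Putting these steps together, a typical build flow looks like the following sketch (assuming the Vivado environment is set up and "make snap_config" has been used to select the card and options):
+ cd ${SNAP_ROOT}
+ make snap_config      # select the card and other options via the Kconfig menu
+ make model            # create viv_project and compile the simulation model
+ make image            # synthesis, implementation and bitstream generation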
+
+Modifications to snap git repositories
+ For a new FPGA card, the detailed items to update are listed below.
+
+ Hardware RTL, setup, simulation
+ Software and tools
+ Testing
+ Publishing
+
+
+ The best way is to grep for keywords like "S241" or "AD8K5" in these directories and find the locations that need modifications, as shown below.
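+ For example (an illustrative command; the keyword depends on which existing card you take as a reference):
+ cd ${SNAP_ROOT}
+ grep -rn "S241" hardware software actions scripts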
+
+ If you see files ending with "_source", like "psl_fpga.vhd_source", it means this file will be pre-processed to generate an output file without the "_source" suffix, like "psl_fpga.vhd". These files contain #ifdef macros or comments like -- only for NVME_USED=TRUE, which help to create a target VHDL/Verilog file for different configurations.
+
+ The files to change are listed below. There may be some differences with newer commits in the SNAP git repository. Keep in mind they include:
+
+ snap_config and environmental files
+ Hardware: psl_accel and psl_fpga (top) RTL files
+ Hardware: tcl files for the workflow
+ Hardware: Board: xdc files for IO/floorplan/clock/bitstream
+ Hardware: DDR: create DDR Memory controller IP (mig) in create_snap_ip.tcl, create DDR memory sim model, and other xdc files
+ Hardware: Other IP: create_ip, sim model, xdc files
+ Software: New card type, register definition
+ Testing: jenkins
+ Readme and Documents
+
+
+
+ Config files to change
+
+
+
+
+
+
+
+ File name
+
+
+
+
+ Changes to do
+
+
+
+
+
+
+ scripts/Kconfig
+ Add the card to the Kconfig menu. Provide Flash information (size/type/user address)
+
+
+ hardware/doc/SNAP-Registers.md
+ SNAP registers for new card - doc
+
+
+ hardware/setup/snap_config.sh
+ SNAP registers - setting
+
+
+
+
+
+
+ RTL/xdc/tcl files to change
+
+
+
+
+
+
+
+ File name
+
+
+
+
+ Changes to do
+
+
+
+
+
+ hardware/hdl/core/psl_accel_${FPGACARD}.vhd_source: specific to card
+ hardware/hdl/core/psl_accel_types.vhd_source: specific to card
+ hardware/hdl/core/psl_fpga_${FPGACARD}.vhd_source: specific to card
+ hardware/setup/${FPGACARD}/capi_bsp_pblock.xdc: specific to card
+ hardware/setup/${FPGACARD}/snap_${FPGACARD}.xdc: specific to card
+ hardware/setup/${FPGACARD}/snap_ddr4pins.xdc: specific to card
+ hardware/setup/build_mcs.tcl: declare card name
+ hardware/setup/create_framework.tcl: declare card name
+ hardware/setup/create_snap_ip.tcl: declare card name and the IPs in use
+ hardware/setup/flash_mcs.tcl: declare card name
+ hardware/setup/snap_bitstream_post.tcl: declare card name
+ hardware/setup/snap_bitstream_pre.tcl: declare card name
+ hardware/setup/snap_bitstream_step.tcl: declare card name
+ hardware/setup/snap_impl_step.tcl: declare card name
+ hardware/sim/ddr4_dimm_???.sv: DDR memory model for simulation. Gather information about how many DDR chips are connected together, the density and data width of each chip, and whether one chip is used for ECC (redundancy). You can take an existing model as a template and modify it.
+ hardware/sim/top_capi?0.sv_source: instantiate the DDR memory model
+ hardware/snap_check_psl (only for CAPI1.0): declare card name
+
+
+
+
+
+ Software files to change
+
+
+
+
+
+
+
+ File name
+
+
+
+
+ Changes to do
+
+
+
+
+
+ software/lib/snap.c: declare card name
+ software/tools/snap_find_card: declare card name + SUBSYSTEM_ID
+ software/include/snap_regs.h: SNAP registers - setting
+
+
+
+
+
+ Other files to change
+
+
+
+
+
+
+
+ File name
+
+
+
+
+ Changes to do
+
+
+
+
+
+ actions/scripts/snap_jenkins.sh: jenkins tests (optional)
+ defconfig/{FPGACARD}*.defconfig: for silent jenkins testing (optional)
+ README.md: announce that a new card is supported
+
+
+
+
+Update capi-utils
+ capi-utils is the third git repository that needs a few modifications. As before, fork it, make the modifications and submit a pull request.
+ git clone https://github.com/[YOUR_USERNAME]/capi-utils
+ There is only one file to modify: "psl-devices". Add a new line, for example:
+ 0x0665 U200 Xilinx 0x1002000 64 SPIx4
+ The first column is the SUBSYSTEM_ID, the second is the card name, the third is the FPGA chip vendor, and the fourth is the user image starting address on the flash. For an SPI device, the block size is 64 bytes. "SPIx4" is the flash interface type; it may also be "DPIx16" or "SPIx8".
+ "SPIx8" uses two bitstreams so another starting address also needs to be provided. And when you call "capi-flash-script" to program the flash, it needs two input bitstream files (primary and secondary).
+
+
+Strategy to enable a new card
+
+ To enable a new card on SNAP, complete the following tasks one by one.
+
+ Stage 1: Verify PCIe interface
+
+ Generate capi_bsp_wrap in capi2-bsp.
+ Make modifications to snap git repository as described above.
+ Select an action example without DDR, for example: hls_helloworld.
+ Go through the "make model" and "make image" processes and build the bitstream files.
+ Plug the card into a Power9 server and connect a JTAG/USB cable to a laptop. Install Vivado Lab on this laptop (it requires a Windows or Linux operating system). Start the Vivado Lab tool and open the Hardware Manager.
+ Power on the server. You will see the FPGA target being recognized by the Vivado Lab tool.
+ Program the generated bitstream files (bin or mcs) to the card. In the Vivado Lab tool, select the FPGA chip, right-click, choose "Add Configuration Memory Device..." and program the bin/mcs files to the flash. See in picture and
+ Wait for it to finish (it may take 10 minutes). Unplug the JTAG/USB cable and reboot the server.
+ After the server has booted, log into the OS and run lspci to see whether the card is there (usually with Device ID 0x0477). Then download snap, capi-utils and libcxl (from GitHub). Go to the snap directory, "make apps" and run the application; see the example commands after this list.
+
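+ A minimal sketch of the last step (assuming the IBM PCI vendor ID 0x1014, and assuming the capi-utils and libcxl repositories live under the ibm-capi GitHub organization):
+ lspci -d 1014:0477                       # look for the CAPI2 device
+ git clone https://github.com/open-power/snap
+ git clone https://github.com/ibm-capi/capi-utils
+ git clone https://github.com/ibm-capi/libcxl
+ cd snap && make apps                     # build libsnap and the example applications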
+ There is another way to replace steps 6 to 8, which we call "fast programming of the bit-file at power-on". Prepare the bit file on the laptop in advance. Unlike the bin/mcs files, which are for the flash, the bit file is used to program the FPGA chip directly. When the server is powered on and Vivado Lab sees the FPGA, right-click the device, choose "Program device..." and select the bit file immediately. This action only takes about 10 seconds and can be done before hostboot on the server starts to scan PCIe devices.
+ Be aware that because only the FPGA chip is programmed (the flash memory is still empty), the FPGA loses power when the server is powered off or rebooted, and the programming in the FPGA chip is lost.
+
+
+
+
+
+
+ When you download and install Vivado Lab, please pick the same version as the Vivado (SDx) that you are using to build images.
+
+
+ Seeing 0477 with "lspci" is the most important milestone. If you don't, please do the following checks:
+
+ Check dmesg. Run "dmesg > dmesg.log" and search for "cxl" in the dmesg.log file.
+ Check the file "/sys/firmware/opal/msglog" to see whether there are link training failure messages. A successful message looks like the one below, which means this PCIe device has been scanned and recognized. The number following "PHB#" is the PCIe device identifier in the format "domain:bus:slot.func". You can also see it with "lspci".
+ [ 63.403485191,5] PHB#0000:00:00.0 [ROOT] 1014 04c1 R:00 C:060400 B:01..01 SLOT=CPU1 Slot2 (16x)
+[ 63.403572553,5] PHB#0000:01:00.0 [EP ] 1014 0477 R:02 C:1200ff ( device) LOC_CODE=CPU1 Slot2 (16x)
+
+ Check create_ip.tcl in capi2-bsp/[FPGACARD]/tcl and verify the configuration of the PCIHIP core.
+
+ If your PCIe device has been recognized as a CAPI device, run "ls /dev/cxl" and you will see "afu*" devices. Your application software can then open the device as if it were an ordinary file.
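+ For example (illustrative output for a card with a single AFU; the exact names depend on the card and kernel):
+ $ ls /dev/cxl
+ afu0.0m  afu0.0s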
+ Some other useful commands to check the PCIe config (use the right PCIe identifier "domain:bus:slot.func"):
+ sudo lspci -s 0000:01:00.0 -vvv
+ For example, you can check the settings coded in the Xilinx PCIHIP core, like the SUBSYSTEM_ID:
+ 0000:01:00.0 Processing accelerators: IBM Device 0477 (rev 02) (prog-if ff)
+ Subsystem: IBM Device 0660
+ Link Speed
+ LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
+ Vital Product Data, which was coded in capi_vsec.vhdl:
+ Capabilities: [b0] Vital Product Data
+Product Name: U200 PCIe CAPI2 Adapter
+Read-only fields:
+ [PN] Part number: Xilinx.U200
+ [V1] Vendor specific: 0000000000000000
+ [V2] Vendor specific: 0000000000000000
+ [V3] Vendor specific: 0000000000000000
+ [V4] Vendor specific: 0000000000000000
+ [RV] Reserved: checksum good, 3 byte(s) reserved
+End
+ And see VSEC and kernel module:
+ Capabilities: [400 v1] Vendor Specific Information: ID=1280 Rev=0 Len=080 <?>
+Kernel driver in use: cxl-pci
+Kernel modules: cxl
+
+
+ Stage 2: Verify Flash interface
+ Use capi-utils to program the bitstream files. If it succeeds, it proves that the Flash interface has been configured correctly. After this step, you can get rid of the JTAG connector and use "capi-flash-script" to program the FPGA bitstreams.
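+ A typical invocation looks like the sketch below; the exact arguments are an assumption here and may differ between capi-utils versions (check the capi-utils README), and SPIx8 cards need both a primary and a secondary bin file:
+ sudo capi-flash-script hardware/build/Images/<your_image>.bin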
+ The mechanism behind "capi-flash-script" is:
+ There is a flash controller on the FPGA (in capi_bsp_wrap), and it connects to the PCIe config space. The flash controller exposes four VSEC registers that allow the host system to control it: the "Flash Address Register", "Flash Size Register", "Flash Status/Control Register" and "Flash Data Port". See the Coherent Accelerator Interface Architecture, Chapter 12.3, "CAIA Vendor-Specific Extended Capability Structure". The capi-utils C source reads the FPGA bitstream "bin" file and writes its bytes to the VSEC "Flash Data Port" register. The bytes are thus sent over PCIe, through the flash controller, and finally arrive in the flash memory on the card.
+
+
+ Stage 3: Verify DDR interface
+
+ Select another action example (hdl_example with DDR) or hls_memcopy.
+ "make model" and "make sim". Make sure the DDR simulation model works well.
+ "make image" to generate the bitstream files.
+ Use capi-utils to program the bitstream "bin" file to the card.
+ Run the application to see whether it works.
+
+ Basically, SNAP only implements one DDR bank (or channel), while most cards have two to four banks. (N250S+ is one of the rare cards with only one DDR bank.) The main reason is that, depending on the user's needs, there are two options: the first is to simply extend the size of the first bank by attaching the second bank to the same DDR memory controller; the other is to use two (or more) memory controllers in parallel for higher throughput. The latter option means duplicating the DDR memory controller, which takes twice the area in the design. In this case, the action_wrapper also needs changes to add the additional DDR ports. For an HLS design, another HLS DDR port should be added in "actions/[YOUR_ACTION]/hw/XXX.CPP". As this is an open-source project, everyone is welcome to contribute by implementing this and adding it to the SNAP design.
+
+
+
+
+ Stage 4: Verify Other IO interface
+ This step depends on the card's capabilities and the specific IOs that the card provides. As with the second (or more) DDR channels, users can freely add their own code for other IO interfaces.
+
+
+
+ Stage 5: Performance Validation
+ You can check the result of "snap/actions/hls_memcopy/tests/test_*_throughput.sh" for bandwidth and "snap/actions/hls_latency_eval/test/test*.sh" for latency.
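+ For example (a sketch; the script names and any required environment variables may differ between SNAP versions):
+ cd ${SNAP_ROOT}/actions/hls_memcopy/tests
+ for t in test_*_throughput.sh; do ./$t; done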
+
+
+ Stage 6: Pressure Test
+ Prepare bitstream files for basic tests, throughput tests, latency tests and max-power tests. Add image flashing tests, card reset tests and others. Run them intensively.
+
+
+Cleanup and submit
+ Now a new FPGA card has been enabled for CAPI2.0 SNAP. Clean up your workspace, check the files and submit your work!
+ capi-utils is independent. Just create a pull request and assign a reviewer. It can only be merged into the master branch after it has been reviewed.
+ Submit the pull request for your capi2-bsp fork before the one for your snap fork. After capi2-bsp is merged into the https://github.com/open-power/capi2-bsp master branch, update the submodule pointer to the latest "open-power/capi2-bsp" master and then submit the pull request for your forked snap.
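+ Updating the submodule pointer can be done roughly like this (a sketch; remotes and branch names depend on your setup):
+ cd ${SNAP_ROOT}/hardware/capi2-bsp
+ git fetch https://github.com/open-power/capi2-bsp master
+ git checkout FETCH_HEAD
+ cd ${SNAP_ROOT}
+ git add hardware/capi2-bsp
+ git commit -m "Point capi2-bsp submodule to the latest open-power master"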
+
+
+
+
+
+
diff --git a/enable_capi_snap/ch_enable_snap.xml b/enable_capi_snap/ch_enable_snap.xml
deleted file mode 100644
index 0d91bcf..0000000
--- a/enable_capi_snap/ch_enable_snap.xml
+++ /dev/null
@@ -1,418 +0,0 @@
-
-
-
-
- Enable a FPGA card in SNAP
- On the FPGA side of SNAP diagram, there are three parts that need to consider when moving to a new FPGA card. They are (a) PSL, (b) PSL/AXI bridge (snap_core), (c) DDR memory controller (mig). And there are also some components in SNAP need to be updated for a new FPGA card. The following sections introduced the the structure of SNAP folders and scripts and the steps.
- SNAP structure
- Firstly, clone the repository:
-
- git clone https://github.com/open-power/snap
- git submodule init
- git submodule update
-
-
-
-
- All of the user-developed accelerators should be put in "actions" directory. There are already some examples there. Each "action" has its "sw", "hw", "tests", and other sub-directories.
- Then back to ${SNAP_ROOT}, "software" directory includes libsnap, header files and some tools. "hardware" directory is the main focus. deconfig has the config files for silent testing purpose, and scripts has the menu settings and other scripts.
-
- How does SNAP work and what are the files used in each step?
-
-
- make snap_config: The menu to select cards and other options is controlled by "script/Kconfig"
-
- make model: This step creates a Vivado project. It firstly calls "hardware/setup/create_snap_ip.tcl" to generate the IP files in use, then calls "hardware/setup/create_framework.tcl" to build the project. About create_framework.tcl:
-
-
- It adds BSP (board support package). In CAPI1.0, it is also called PSL Checkpoint file (b_route_design.dcp) or base_image. It uses the path pointed to b_route_design.dcp and adds it into the design. In CAPI2.0, it will call the make process in capi2-bsp submodule. Submodule "capi2-bsp" reads the encrypted PSL source files, adds PCIe and Flash logic, packs them into capi2_bsp_wrap.xcix (IP container file). Then "create_framework.tcl" adds the capi2_bsp_wrap.xcix into the design.
-
-
- It adds FPGA top files and snap_core files (in hardware/hdl/core).
-
-
- It adds constrain files: in hardware/setup/${FPGACARD} or in hardware/capi2-bsp/${FPGACARD}
-
-
- It adds user files (in actions/${ACTION_NAME}/hw). User's action hardware uses top file named "action_wrapper.vhd"
-
-
- It adds simulation files (in hardware/sim/core) including simulation top files and simulation models. (If "no_sim" is selected in snap_config menu, this step is skipped.)
-
-
- After above steps, "viv_project" is created. You can open it with Vivado GUI, and check the design hierarchy. And it will call the selected simulator to compile the simulation model.
-
-
- make image: This step runs synthesis, implementation and bitstream generation. It calls "hardware/setup/snap_build.tcl" and also uses some related tcl scripts to work on "viv_project". In this step, "hardware/build" will be created and the output products like bit images, checkpoints (middle products for debugging) and reports (reports of timing, clock, IO, utilization, etc.) If everything runs well and timing passes, user will get the bitstream files (in "Images" sub directory) to program the FPGA card.
-
-
-
- BSP (board support package) module
-
- For CAPI1.0, base_image contains surrounding logic and the kernel logic:
-
- PCIe hard IP core (pcie3_ultrascale_0)
- Flash Controller (psl_flash)
- VSEC: Vendor Specific Extended Capability (psl_vsec)
- Xilinx MultiBoot control logic (psl_xilmltbt)
- PSL kernel logic (psl)
-
- The interface between base_image and AFU(psl_accel) has 5 groups of signals, described in PSL spec CAPI1.0 PSL/AFU interface Spec.
- The interface between base_image and Chip IOs are card specific, and the information need to be provided by Card Vendor. Generally, they include:
-
- Flash interface (usually DPIx16)
- PCIe interface: perst, refclk, TX and RX data lanes
- Peripheral IPs: I2C, LED, DDR, Ethernet, etc.
-
- Marked in light orange color, you can download the entire base_image (b_route_design.dcp) from OpenPower Portal.
-
-
- For CAPI2.0, the structure is similar, but the PSL9 logic (marked in light orange color) is provided as an encrypted Zip package. It can be downloaded from OpenPower Portal and put in "capi2-bsp/psl" directory. Then it uses the make process in capi2-bsp to generate an IP container file (capi_bsp_wrap.xcix). Please refer to the README file at https://github.com/open-power/capi2-bsp for more details.
- CAPI2.0 cards are using SPI Flash interface: SPIx4 or dual SPIx4 (also mentioned as SPIx8). For PCIe Gen3, it uses 16 lanes. For PCIe Gen4, it uses 8 lanes. The interface of PSL9 has 6 groups of signals. Please refer to CAPI2.0 PSL/AFU interface Spec for the details.
- The logic in snap_core (CAPI2.0) implements the data path with DMA interface. Buffer interface is not used.
- The above two figures apply to both HDK development and SNAP framework. The difference is, for HDK developers, they work on the AFU by themselves. For SNAP developers, they make use of the snap_core logic and only work on action_wrapper. The AFU part for SNAP developers contains following blocks:
-
-
- AFU logic RTL files are open-sourced. Developer can make modifications for their own purpose, like adding multiple DDR channels, adding NVMe and Ethernet controllers.
-
-
-
- Modifications to snap git repositories
- For a new FPGA card, the detailed items to update are:
-
- Preparations
- Hardware RTL, setup, simulation
- Software and tools
- Testing
- Publishing
-
-
- Preppartions
- First, give a FPGACARD name. It should start from the company's name, following with the card ID and be short. For example. ADKU3 = Alpha-Data ADM-PCIE-KU3. Get follow information from the card vendor.
-
-
- Information to collect
-
-
-
-
-
-
-
- Item
-
-
-
-
- Description
-
-
-
-
-
-
- FPGACARD
- Short card name used in SNAP
-
-
- FPGACHIP
- FPGA part name, for example, xcvu9p-fsgd2104-2L-e
-
-
- Flash Type
- Flash chip that attached to FPGA, for example mt28gu01gaax1e-bpi-x16. And the related xdc files for FPGA config.
-
-
- DDR MC IP
- Short card name used in SNAP
-
-
- FPGACARD
- DDR memory controller Vivado IP tcl/xdc file.
-
-
- Other peripherals
- NVMe IP, Ethernet IP and so on (Optional)
-
-
- IO pins
- PACKAGE_PIN for base_image or bsp: flash, pcie, i2c etc.
- PACKAGE_PIN for peripheral IPs.
-
-
-
-
-
- SNAP environment updates
- The best way is to grep some keywords like "S241" or "AD8K5" under the directories and look for the locations that need modifications.
-
- If you meet files ending with "_source", like "psl_fpga.vhd_source", that means this file will be pre-processed to generate the output file without "_source" suffix, like "psl_fpga.vhd". There are #ifdef macros or comments like -- only for NVME_USED=TRUE. They help to create a target VHDL/Verilog file with different configurations.
-
- Below lists the files to change. There may be some differences with new commits in SNAP git repository. Keep in mind they include:
-
- snap_config and environmental files
- Hardware: psl_accel and psl_fpga (top) RTL files
- Hardware: tcl files for the workflow
- Hardware: Board: xdc files for IO/floorplan/clock/bitstream
- Hardware: DDR: create_ip, sim model, xdc files
- Hardware: Other IP: create_ip, sim model, xdc files
- Software: New card type, register definition
- Testing: jenkins
- Readme and Documents
-
-
-
- For CAPI1.0, you need to generate a new PSL checkpoint file and upload it to OpenPower Portal. Section describes the details.
- For CAPI2.0, you need to add a ${FPGACARD} directory in capi2-bsp git repository. Copy an existing folder as a start and follow the README file.
- Make sure the information in xdc/tcl files are permitted to be open-source.
- Send email to OpenPower Acceleration Workgroup or contact your representative to apply for a subsystem device ID for the new card. For example, ADKU3 uses 0x0605. S241 uses 0x0660.
- You also need to update https://github.com/ibm-capi/capi-utils to allow capi-flash-script to program this new card. Subsystem ID will be used there. It is also used in snap/software/tools/snap_find_card.
-
-
-
-
- Config files to change
-
-
-
-
-
-
-
- File name
-
-
-
-
- Changes done
-
-
-
-
-
-
- scripts/Kconfig
- adding card to the Kconfig menu. Provide Flash information (size/type/user address)
-
-
- hardware/doc/SNAP-Registers.md
- SNAP registers for new card - doc
-
-
- hardware/setup/snap_config.sh
- SNAP registers - setting
-
-
-
-
-
-
- RTL/xdc/tcl files to change
-
-
-
-
-
-
-
- File name
-
-
-
-
- Changes done
-
-
-
-
-
- hardware/hdl/core/psl_accel_${FPGACARD}.vhd_source specific to card
-hardware/hdl/core/psl_accel_types.vhd_sourcespecific to card
-hardware/hdl/core/psl_fpga_${FPGACARD}.vhd_source specific to card
-hardware/setup/${FPGACARD}/capi_bsp_pblock.xdc specific to card
-hardware/setup/${FPGACARD}/snap_${FPGACARD}.xdc specific to card
-hardware/setup/${FPGACARD}/snap_ddr4pins.xdc specific to card
-hardware/setup/build_mcs.tcldeclare card name
-hardware/setup/create_framework.tcldeclare card name
-hardware/setup/create_snap_ip.tcldeclare card name and its IP
-hardware/setup/flash_mcs.tcldeclare card name
-hardware/setup/snap_bitstream_post.tcldeclare card name
-hardware/setup/snap_bitstream_pre.tcldeclare card name
-hardware/setup/snap_bitstream_step.tcldeclare card name
-hardware/setup/snap_impl_step.tcldeclare card name
-hardware/snap_check_psldeclare card name
-
-
-
-
-
- Software files to change
-
-
-
-
-
-
-
- File name
-
-
-
-
- Changes done
-
-
-
-
-
-software/lib/snap.cdeclare card name
-software/tools/snap_find_carddeclare card name + id
-software/include/snap_regs.hSNAP registers - setting
-
-
-
-
-
- Other files to change
-
-
-
-
-
-
-
- File name
-
-
-
-
- Changes done
-
-
-
-
-
-actions/scripts/snap_jenkins.shjenkins tests (optional)
-defconfig/{FPGACARD}*.defconfigFor silent jenkins testing (optional)
-README.mdAnnounce a new card is supported
-
-
-
-
-
-
- Strategy to enable a new card
-
- To enable a new card on SNAP, please take following tasks one by one.
-
- Stage 1: Verify PCIe interface
-
- Make modifications to snap git repository (and capi2-bsp) as described above.
- Select an action example without DDR, for example: hls_helloworld.
- Go through the "make model" and "make image" processes and get the bitstream files.
- Plug the card onto Power8/Power9 server and power on.
- Use Jtag to program the generated bitstream files (bin or mcs) to the card. You need a laptop or workstation installed Vivado Lab Edition, and connect a JTAG/USB cable to the card. Open Hardware Manager, open target, select the FPGA chip and right-click, choose "Add Configuration Memory Device..." and program the bitstream files. See in picture and
- Wait it done, unplug the JTAG/USB cable, reboot the server.
- When the server is booted, install snap, capi-utils, libcxl. Run lspci to see if the card is there. (Usually with ID 0x0477). Then go to snap directory, make apps and run the application.
-
-
-
-
-
-
- When you download and install Vivado Lab Edition, please pick up as same version as the Vivado (SDx) that you are using to build images.
-
-
- Stage 2: Verify Flash interface
- Use capi-utils to program the bitstream files. If it succeeds, it proves that the Flash interface has been configured correctly.
-
-
- Stage 3: Verify DDR interface
-
- Select another action example (hdl_example with DDR) or hls_memcopy.
- "make model" and "make sim". Make sure the DDR simulation model works well.
- "make image" to generate the bitstream files.
- Use capi-utils to program the bitstream files to the card.
- Run the application to see if it works.
-
-
-
-
-
- Stage 4: Verify Other IO interface
- This step is decided by the card vendor and the specific IOs that the card provide.
-
-
-
- Stage 5: Performance Validation
- You can check the result of "snap/actions/hls_memcopy/tests/test_*_throughput.sh" for bandwidth and "snap/actions/hls_latency_eval/test/test*.sh" for latency.
-
-
- Stage 6: Pressure Test
- Prepare bitstream files for basic tests, throughput tests, latency tests, max-power tests. Adding image flashing tests, card reset tests and others. Run them intensively.
-
-
-
-
-
-
-
-