Several of the testbenches have stimuli code divided into sections preceded with a header comment explaining
what is being tested. These sections have been made into VUnit test cases. The default behavior of VUnit is
to run each test case in a separate simulation, which comes with a number of benefits:
* A failing test case doesn't prevent other test cases from being executed
* Test cases are independent; a test case cannot fail as a side effect of a problem with another test case
* Test execution can be parallelized further, reducing the overall test execution time
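In VHDL, the per-section test cases take the shape of run() branches inside the VUnit test runner. A minimal sketch of the structure (entity and test names here are placeholders, not taken from the actual testbenches):

    library vunit_lib;
    context vunit_lib.vunit_context;

    entity tb_sketch is
        generic (runner_cfg : string);
    end entity;

    architecture tb of tb_sketch is
    begin
        main : process
        begin
            test_runner_setup(runner, runner_cfg);
            while test_suite loop
                if run("first section") then
                    -- stimuli that used to sit under the first header comment
                elsif run("second section") then
                    -- stimuli that used to sit under the second header comment
                end if;
            end loop;
            test_runner_cleanup(runner);
        end process;
    end architecture;

Each run() branch becomes a test case that VUnit can launch in its own simulation.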
Signed-off-by: Lars Asplund <lars.anders.asplund@gmail.com>
This commit also removes the dependencies these testbenches have on VHPIDIRECT.
The use of VHPIDIRECT limits the number of simulators available to the project. Rather than using
foreign functions, the testbenches can be implemented entirely in VHDL where equivalent functionality exists.
For these testbenches the VHPIDIRECT-based randomization functions were replaced with VHDL-based functions.
The testbenches recognized by VUnit can be executed in parallel threads for better simulation performance
by passing the -p option to the run.py script.
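For illustration, a pure-VHDL randomization routine can be built on uniform() from ieee.math_real; this is only a sketch with placeholder names, not the code actually added:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.math_real.all;

    package rand_sketch is
        -- Illustrative only: fill a vector with pseudo-random bits, keeping
        -- the two uniform() seeds as inout state between calls.
        procedure random_vector(seed1, seed2 : inout positive;
                                result : out std_ulogic_vector);
    end package;

    package body rand_sketch is
        procedure random_vector(seed1, seed2 : inout positive;
                                result : out std_ulogic_vector) is
            variable rnd : real;
        begin
            for i in result'range loop
                uniform(seed1, seed2, rnd);
                if rnd > 0.5 then
                    result(i) := '1';
                else
                    result(i) := '0';
                end if;
            end loop;
        end procedure;
    end package body;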
Signed-off-by: Lars Asplund <lars.anders.asplund@gmail.com>
The VUnit run script finds all VHDL files matching the given search patterns, works out their dependencies, and supports incremental compilation based on those dependencies.
The same script is used for all VUnit-supported simulators. Supporting several simulators simplifies the adoption of this project.
At this point only compilation is performed. Subsequent commits will enable simulation of the VHDL testbenches.
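For reference, the skeleton of such a run script follows the usual VUnit pattern (the library name and file glob below are placeholders):

    from vunit import VUnit

    # Parse command-line options such as -p, --clean and the simulator choice
    vu = VUnit.from_argv()

    lib = vu.add_library("lib")
    # VUnit scans the matching files, derives the dependency graph and
    # recompiles only what is out of date.
    lib.add_source_files("*.vhdl")

    vu.main()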
Signed-off-by: Lars Asplund <lars.anders.asplund@gmail.com>
This adds litesdcard.v generated from the litex/litesdcard project,
along with logic in top-arty.vhdl to connect it into the system.
There is now a DMA wishbone coming into soc.vhdl which is narrower
than those of the other wishbone masters (it has 32-bit data rather
than 64-bit), so there is a widening/narrowing adapter between it and
the main wishbone master arbiter.
Also, litesdcard generates a non-pipelined wishbone for its DMA
connection, which needs to be converted to a pipelined wishbone. We
have a latch on both the incoming and outgoing sides of the wishbone
in order to help make timing (at the cost of two extra cycles of
latency).
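The widening/narrowing part of the adapter amounts to steering the 32-bit data and byte selects onto the correct half of the 64-bit bus. A much-simplified sketch of that steering (signal names are illustrative, not the actual soc.vhdl ones):

    -- Illustrative only: map a 32-bit DMA access onto the 64-bit bus.
    -- 'adr_word_sel' picks which 32-bit half of the 64-bit word is
    -- addressed; read data comes back from the same half.
    process(all)
    begin
        wide_dat_w <= narrow_dat_w & narrow_dat_w;   -- replicate write data
        if adr_word_sel = '0' then
            wide_sel     <= "0000" & narrow_sel;     -- low half selected
            narrow_dat_r <= wide_dat_r(31 downto 0);
        else
            wide_sel     <= narrow_sel & "0000";     -- high half selected
            narrow_dat_r <= wide_dat_r(63 downto 32);
        end if;
    end process;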
litesdcard generates an interrupt signal which is wired up to input 3
of the ICS (IRQ 19).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This makes the icache snoop writes to memory in the same way that the
dcache does, thus making DMA cache-coherent for the icache as well as
the dcache.
This also simplifies the logic for the WAIT_ACK state by removing the
stbs_done variable, since is_last_row(r.store_row, r.end_row_ix) can
only be true when stbs_done is true.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Since the expression is_last_row(r1.store_row, r1.end_row_ix) can only
be true when stbs_done is true, there is no need to include stbs_done
in the expression for the reload being completed, and hence no need to
compute stbs_done in the RELOAD_WAIT_ACK state.
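In other words, the completion test in RELOAD_WAIT_ACK collapses from a conjunction to the single is_last_row check, roughly (paraphrased, not quoted from dcache.vhdl):

    -- Before: stbs_done and is_last_row(r1.store_row, r1.end_row_ix)
    -- After:  is_last_row alone is sufficient, since it implies stbs_done
    if is_last_row(r1.store_row, r1.end_row_ix) then
        -- reload complete: update the tag/valid state and leave the
        -- RELOAD_WAIT_ACK state
    end if;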
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds a path where the wishbone that goes out to memory and I/O
also gets fed back to the dcache, which looks for writes that it
didn't initiate, and invalidates any cache line that gets written to.
This involves a second read port on the cache tag RAM for looking up
the snooped writes, and effectively a second write port on the cache
valid bit array to clear bits corresponding to snoop hits.
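Ignoring set associativity and the exact types, the snoop path looks roughly like this (names are illustrative, not the actual dcache.vhdl ones):

    -- Illustrative only: watch the outgoing wishbone for writes we did
    -- not initiate and invalidate any cache line they hit.
    snoop : process(clk)
        variable idx : integer;
    begin
        if rising_edge(clk) then
            if snoop_in.cyc = '1' and snoop_in.stb = '1' and snoop_in.we = '1' then
                idx := get_index(snoop_in.adr);
                -- second read port on the tag RAM
                if cache_tags(idx) = get_tag(snoop_in.adr) then
                    -- effectively a second write port on the valid bits
                    cache_valids(idx) <= '0';
                end if;
            end if;
        end if;
    end process;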
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Update documentation to reference fusesoc init for Xilinx boards, for
those like me who have never used fusesoc before. Add a reference to the
board files for Digilent boards and note that board files may need to
be installed for other boards as appropriate.
Signed-off-by: Antony Vennard <antony@vennard.ch>
Our SPI controller sends 8 dummy clocks at boot, which Ben
added for some Xilinx boards. This should be harmless, but
it confuses the flash testbench in the Caravel project.
Add a parameter so it can be overridden at the top level.
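A sketch of what the override looks like (entity and generic names here are placeholders, not the actual SPI controller's):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Illustrative only: the boot dummy-clock count becomes a generic,
    -- with the old behaviour (8 clocks) as the default.
    entity spi_ctrl_sketch is
        generic (
            BOOT_DUMMY_CLOCKS : natural := 8
        );
        port (
            clk : in  std_ulogic;
            sck : out std_ulogic
        );
    end entity;

A top level that does not want the dummy clocks, such as the Caravel one, can then pass BOOT_DUMMY_CLOCKS => 0 in its generic map.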
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
We want much smaller caches and TLBs when building for sky130, so
allow the toplevel file to override the defaults.
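A sketch of the kind of override this enables (generic names are placeholders for the real core.vhdl/soc.vhdl ones):

    library ieee;
    use ieee.std_logic_1164.all;

    entity toplevel_sketch is
        port (clk : in std_ulogic);
    end entity;

    architecture sketch of toplevel_sketch is
        -- Placeholder component standing in for the SoC; the generic
        -- names are illustrative only.
        component soc_sketch is
            generic (
                ICACHE_NUM_LINES : natural := 64;
                DCACHE_NUM_LINES : natural := 64;
                TLB_NUM_ENTRIES  : natural := 64
            );
            port (clk : in std_ulogic);
        end component;
    begin
        soc0 : soc_sketch
            generic map (
                ICACHE_NUM_LINES => 4,   -- much smaller for sky130
                DCACHE_NUM_LINES => 4,
                TLB_NUM_ENTRIES  => 8
            )
            port map (clk => clk);
    end architecture;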
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
The protocol used by the SPI bridge firmware changed as of openocd
v0.11. As this is the version packaged by Debian Bullseye, add the
firmware for convenience.
Signed-off-by: Joel Stanley <joel@jms.id.au>
This adds a GPIO controller which provides 32 bits of I/O. The
registers are modelled on the set used by the gpio-ftgpio010.c driver
in the Linux kernel. Currently there is no interrupt capability
implemented, though an interrupt line from the GPIO subsystem to the
XICS has been connected.
For the Arty A7 board, GPIO lines 0 to 13 are connected to the pins
labelled IO0 to IO13 on the "shield" connector, GPIO lines 14 to 29
connect to IO26 to IO41, GPIO line 30 connects to the pin labelled A
(aka IO42), and GPIO line 31 is connected to LED 7.
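A heavily simplified register sketch of such a block (offsets and names below are illustrative only, not the ftgpio010 layout the real controller follows):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Illustrative only: a 32-bit GPIO block with data-out, data-in and
    -- direction registers.
    entity gpio_sketch is
        port (
            clk      : in  std_ulogic;
            wr_en    : in  std_ulogic;
            reg_addr : in  std_ulogic_vector(3 downto 0);
            wr_data  : in  std_ulogic_vector(31 downto 0);
            rd_data  : out std_ulogic_vector(31 downto 0);
            gpio_in  : in  std_ulogic_vector(31 downto 0);
            gpio_out : out std_ulogic_vector(31 downto 0);
            gpio_dir : out std_ulogic_vector(31 downto 0)   -- '1' = output
        );
    end entity;

    architecture rtl of gpio_sketch is
        signal data_out : std_ulogic_vector(31 downto 0) := (others => '0');
        signal dir      : std_ulogic_vector(31 downto 0) := (others => '0');
    begin
        gpio_out <= data_out;
        gpio_dir <= dir;

        process(clk)
        begin
            if rising_edge(clk) then
                if wr_en = '1' then
                    case reg_addr is
                        when "0000" => data_out <= wr_data;  -- data out register
                        when "0010" => dir      <= wr_data;  -- direction register
                        when others => null;
                    end case;
                end if;
                case reg_addr is
                    when "0000" => rd_data <= data_out;
                    when "0001" => rd_data <= gpio_in;       -- data in register
                    when "0010" => rd_data <= dir;
                    when others => rd_data <= (others => '0');
                end case;
            end if;
        end process;
    end architecture;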
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Make sure the SPRs are initialized and we can't read X state.
(Mikey: rebased and added console/bin file for testing)
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
If the DAR and DSISR are read before they are written, we hit an assertion failure:
register_file.vhdl:55:25:@60195ns:(report note): Writing GPR 09 00000000XXXXXXXX
register_file.vhdl:61:17:@60195ns:(assertion failure): Assertion violation
This initialises DAR/DSISR to avoid the problem.
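A minimal illustration of the kind of change involved (declarations paraphrased, not quoted from the source):

    -- Give the registers a defined reset value so a read before the
    -- first write returns zeroes rather than 'X'es.
    signal dar   : std_ulogic_vector(63 downto 0) := (others => '0');
    signal dsisr : std_ulogic_vector(31 downto 0) := (others => '0');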
Signed-off-by: Michael Neuling <mikey@neuling.org>
Check that stb, cyc and ack are never undefined. While not really needed
here, this also tests if --pragma synthesis_off/--pragma synthesis_on
works on all the tools we use.
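The checks have this general shape, wrapped in the pragmas so synthesis tools skip them (signal names are illustrative):

    -- pragma synthesis_off
    wb_check : process(clk)
    begin
        if rising_edge(clk) then
            assert not is_x(wb_out.cyc) and not is_x(wb_out.stb)
                report "wishbone cyc/stb is undefined" severity failure;
            assert not is_x(wb_in.ack)
                report "wishbone ack is undefined" severity failure;
        end if;
    end process;
    -- pragma synthesis_on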
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
The idea here is that we can have multiple instructions in progress at
the same time as long as they all go to the same unit, because that
unit will keep them in order. If we get an instruction for a
different unit, we wait for all the previous instructions to finish
before executing it. Since the loadstore unit is the only one that is
currently pipelined, this boils down to saying that loadstore
instructions can go ahead while l_in.in_progress = 1, but other
instructions have to wait until it is 0.
This gives a 2% increase on coremark performance on the Arty A7-100
(from ~190 to ~194).
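Expressed as an issue condition, the rule is roughly the following (names are illustrative apart from l_in.in_progress, which is quoted above):

    -- Stall a new instruction only if something is still in flight in
    -- the loadstore unit and the new instruction is for a different unit.
    stall <= '1' when l_in.in_progress = '1' and instr_unit /= UNIT_LDST else '0';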
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This makes loadstore use a 3-stage pipeline. For now, only one
instruction goes through the pipe at a time. Completion and writeback
are still combinatorial off the valid signal back from the dcache, so
performance should be the same as before. In future it should be able
to sustain one load or store per cycle provided they hit in the
dcache.
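Structurally the pipe is three request registers in series, with only one of them holding a valid request at a time for now; a rough sketch (record and signal names are illustrative):

    -- Illustrative 3-stage loadstore pipeline: requests advance one
    -- stage per cycle when not stalled.
    process(clk)
    begin
        if rising_edge(clk) then
            if stall = '0' then
                stage1 <= request_in;   -- first stage of the pipe
                stage2 <= stage1;       -- second stage
                stage3 <= stage2;       -- third stage
            end if;
        end if;
    end process;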
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>