Our SPI controller sends 8 dummy clocks at boot, which Ben added for
some Xilinx boards. This should be harmless, but it confuses the flash
testbench in the Caravel project.
Add a parameter so it can be overridden at the top level.
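As a rough sketch of the intent, the override could be a generic on
the controller entity; the entity name spi_flash_ctrl and the generic
name BOOT_CLOCKS below are illustrative assumptions, not necessarily
what the code uses:

    library ieee;
    use ieee.std_logic_1164.all;

    entity spi_flash_ctrl is
        generic (
            -- Number of dummy clocks driven at boot; a top level such
            -- as the Caravel integration could set this to 0.
            -- (Hypothetical name and default.)
            BOOT_CLOCKS : natural := 8
        );
        port (
            clk : in std_ulogic;
            rst : in std_ulogic
            -- remaining ports omitted
        );
    end entity spi_flash_ctrl;

The top level would then pass a different value in its generic map
when the dummy clocks are unwanted.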
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
We want much smaller caches and TLBs when building for sky130, so
allow the toplevel file to override the defaults.
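As a hedged sketch of what this enables, a sky130 toplevel might
shrink the defaults in its instantiation of the core; the generic
names and values below are illustrative assumptions:

    -- Fragment of a toplevel architecture (names illustrative).
    core_0: entity work.core
        generic map (
            ICACHE_NUM_LINES => 16,   -- much smaller than the default
            DCACHE_NUM_LINES => 16,
            TLB_SET_SIZE     => 16
        )
        port map (
            clk => system_clk,
            rst => system_rst
            -- remaining ports omitted
        );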
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Make sure the SPRs are initialized and we can't read X state.
(Mikey: rebased and added console/bin file for testing)
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
If the DAR and DSISR are read before they are written, we assert with:
register_file.vhdl:55:25:@60195ns:(report note): Writing GPR 09 00000000XXXXXXXX
register_file.vhdl:61:17:@60195ns:(assertion failure): Assertion violation
This initialises DAR/DSISR to avoid this.
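A minimal sketch of the kind of fix, assuming the registers live in a
record r with dar and dsisr fields (names illustrative):

    -- Fragment: clear DAR and DSISR on reset so a read before any
    -- write returns zeroes rather than 'X'.
    process(clk)
    begin
        if rising_edge(clk) then
            if rst = '1' then
                r.dar   <= (others => '0');
                r.dsisr <= (others => '0');
            end if;
        end if;
    end process;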
Signed-off-by: Michael Neuling <mikey@neuling.org>
Check that stb, cyc and ack are never undefined. While not really needed
here, this also tests if --pragma synthesis_off/--pragma synthesis_on
works on all the tools we use.
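A sketch of the kind of check meant here, assuming wishbone record
signals named wb_out and wb_in; the pragma comments fence the
assertions off from synthesis:

    --pragma synthesis_off
    wb_check: process(clk)
    begin
        if rising_edge(clk) then
            assert not is_x(wb_out.cyc) and not is_x(wb_out.stb)
                report "cyc or stb is undefined" severity failure;
            assert not is_x(wb_in.ack)
                report "ack is undefined" severity failure;
        end if;
    end process;
    --pragma synthesis_on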
Signed-off-by: Anton Blanchard <anton@linux.ibm.com>
The idea here is that we can have multiple instructions in progress at
the same time as long as they all go to the same unit, because that
unit will keep them in order. If we get an instruction for a
different unit, we wait for all the previous instructions to finish
before executing it. Since the loadstore unit is the only one that is
currently pipelined, this boils down to saying that loadstore
instructions can go ahead while l_in.in_progress = 1 but other
instructions have to wait until it is 0.
This gives a 2% increase on coremark performance on the Arty A7-100
(from ~190 to ~194).
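A hedged sketch of the issue condition described above; only
l_in.in_progress comes from the text, the other names are
illustrative:

    -- Loadstore ops may keep issuing while the LSU has work in
    -- flight; anything for a different unit waits for it to drain.
    stall <= '1' when l_in.in_progress = '1' and instr_unit /= UNIT_LDST
             else '0';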
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This makes loadstore use a 3-stage pipeline. For now, only one
instruction goes through the pipe at a time. Completion and writeback
are still combinatorial off the valid signal back from the dcache, so
performance should be the same as before. In future it should be able
to sustain one load or store per cycle provided they hit in the
dcache.
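Roughly, the stage advance could look like the fragment below; the
stage names and signals are illustrative, not the actual loadstore1
code:

    -- Fragment (inside the clocked process): advance the three stages
    -- when nothing is stalled; only one op is in flight for now.
    if stall = '0' then
        r3 <= r2;        -- stage 3: completion / writeback handshake
        r2 <= r1;        -- stage 2: dcache access
        r1 <= new_req;   -- stage 1: address generation, request setup
    end if;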
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This fixes two bugs which show up when multiple operations are in
flight in the dcache, and adds a 'hold' input which will be needed
when loadstore1 is pipelined.
The first bug is that dcache needs to sample the data for a store on
the cycle after the store request comes in even if the store request
is held up because of a previous request (e.g. if the previous request
is a load miss or a dcbz).
The second bug is that a load request coming in for a cache line being
refilled needs to be handled immediately in the case where it is for
the row whose data arrives on the same cycle. If it is not, then it
will be handled as a separate cache miss and the cache line will be
refilled again into a different way, leading to two ways both being
valid for the same tag. This can lead to data corruption, in the
scenario where subsequent writes go to one of the ways and then that
way gets displaced but the other way doesn't. This bug could in
principle show up even without having multiple operations in flight in
the dcache.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This moves the logic for redirecting fetching and writing SRR0 and
SRR1 to writeback. The aim is that ultimately units other than
execute1 can send their interrupts to writeback along with their
instruction completions, so that there can be multiple instructions
in flight without needing execute1 to keep track of the address
of each outstanding instruction.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This changes the bypass path. Previously it went from after
execute1's output to after decode2's output. Now it goes from before
execute1's output register to before decode2's output register. The
reason is that the new path will be simpler to manage when there are
possibly multiple instructions in flight. This means that the
bypassing can be managed inside decode2 and control.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This changes the way GPR hazards are detected and tracked. Instead of
having a model of the pipeline in gpr_hazard.vhdl, which has to mirror
the behaviour of the real pipeline exactly, we now assign a 2-bit tag
to each instruction and record which GSPR the instruction writes.
Subsequent instructions that need to use the GSPR get the tag number
and stall until the value with that tag is being written back to the
register file.
For now, the forwarding paths are disabled. That gives about an 8%
reduction in coremark performance.
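An illustrative sketch of the stall condition under this scheme; the
entity, types and names are assumptions, not the actual control logic:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    -- Stall the consumer until writeback announces the producer's tag.
    entity tag_stall is
        port (
            waiting_for_tag : in  std_ulogic;            -- unsatisfied dependency
            source_tag      : in  unsigned(1 downto 0);  -- producer's tag
            wb_tag          : in  unsigned(1 downto 0);  -- tag at writeback
            wb_tag_valid    : in  std_ulogic;
            stall           : out std_ulogic
        );
    end entity;

    architecture rtl of tag_stall is
    begin
        stall <= '1' when waiting_for_tag = '1' and
                          not (wb_tag_valid = '1' and wb_tag = source_tag)
                 else '0';
    end architecture;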
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This uses the instruction doubling machinery to convert conditional
branch instructions that update both CTR and LR (e.g., bdnzl, bdnzlrl)
into two instructions, of which the first updates CTR and determines
whether the branch is taken, and the second updates LR and does the
redirect if necessary.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This uses the instruction-doubling machinery to send load with update
instructions down to loadstore1 as two separate ops, rather than
one op with two destinations. This will help to simplify the value
tracking mechanisms.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements a cache in fetch1, where each entry stores the address
of a simple branch instruction (b or bc) and the target of the branch.
When fetching sequentially, if the address being fetched matches the
cache entry, then fetching will be redirected to the branch target.
The cache has 1024 entries and is direct-mapped, i.e. indexed by bits
11..2 of the NIA.
The bus from execute1 now carries information about taken and
not-taken simple branches, which fetch1 uses to update the cache.
The cache entry is updated for both taken and not-taken branches, with
the valid bit being set if the branch was taken and cleared if the
branch was not taken.
If fetching is redirected to the branch target then that goes down the
pipe as a predicted-taken branch, and decode1 does not do any static
branch prediction. If fetching is not redirected, then the next
instruction goes down the pipe as normal and decode1 does its static
branch prediction.
In order to make timing, the lookup of the cache is pipelined, so on
each cycle the cache entry for the current NIA + 8 is read. This
means that after a redirect (from decode1 or execute1), only the third
and subsequent sequentially-fetched instructions will be able to be
predicted.
This improves the coremark value on the Arty A7-100 from about 180 to
about 190 (more than 5%).
The BTC is optional. Builds for the Artix-7 35T part have it off by
default because the extra ~1420 LUTs it takes mean that the design
doesn't fit on the Arty A7-35 board.
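A hedged sketch of the structure this implies; the record layout, the
use of a full (rather than partial) tag, and the details of the
pipelined read are assumptions, not the actual fetch1 code:

    -- Declarations (fragment): 1024-entry direct-mapped BTC.
    type btc_entry_t is record
        valid  : std_ulogic;
        tag    : std_ulogic_vector(63 downto 12); -- upper NIA bits
        target : std_ulogic_vector(63 downto 2);  -- branch target
    end record;
    type btc_array_t is array(0 to 1023) of btc_entry_t;

    -- Lookup (fragment): the entry was read a cycle early (for
    -- NIA + 8, indexed by bits 11..2) to ease timing; redirect on a hit.
    if entry.valid = '1' and entry.tag = nia(63 downto 12) then
        next_nia        := entry.target & "00";
        predicted_taken := '1';
    end if;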
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Using the main adder for comparisons has the disadvantage of creating
a long path from the CA/OV bit forwarding to v.busy via the carry
input of the adder, the comparison result, and determining whether a
trap instruction would trap. Instead we now have dedicated
comparators for the high and low words of a_in vs. b_in, and combine
their results to get the signed and unsigned comparison results.
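An illustrative standalone version of that idea (not the actual
execute1 code); note that for the signed result only the high word is
treated as signed, the low word is still compared unsigned:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity word_cmp is
        port (
            a_in        : in  std_ulogic_vector(63 downto 0);
            b_in        : in  std_ulogic_vector(63 downto 0);
            eq          : out std_ulogic;
            lt_unsigned : out std_ulogic;
            lt_signed   : out std_ulogic
        );
    end entity;

    architecture rtl of word_cmp is
    begin
        cmp: process(a_in, b_in)
            variable hi_eq, lo_eq, lo_lt : boolean;
        begin
            -- Separate comparators for the high and low 32-bit words.
            hi_eq := a_in(63 downto 32) = b_in(63 downto 32);
            lo_eq := a_in(31 downto 0) = b_in(31 downto 0);
            lo_lt := unsigned(a_in(31 downto 0)) < unsigned(b_in(31 downto 0));
            -- Combine the word results into the 64-bit outcomes.
            eq          <= '1' when hi_eq and lo_eq else '0';
            lt_unsigned <= '1' when (unsigned(a_in(63 downto 32)) < unsigned(b_in(63 downto 32)))
                               or (hi_eq and lo_lt) else '0';
            lt_signed   <= '1' when (signed(a_in(63 downto 32)) < signed(b_in(63 downto 32)))
                               or (hi_eq and lo_lt) else '0';
        end process;
    end architecture;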
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This breaks up the enormous if .. elsif .. case .. elsif statement in
execute1 in order to try to make it simpler and more understandable.
We now have decode2 deciding whether the instruction has a value to be
written back to a register (GPR, GSPR, FPR, etc.) rather than
individual cases in execute1 setting result_en. The computation of
the data to be written back is now independent of detection of various
exception conditions. We now have:
- an if block determining whether any exception condition exists that
  prevents the next instruction from being executed,
- then the case statement which performs actions such as setting
  carry/overflow bits, determining if a trap exception exists, doing
  branches, etc.,
- then an if statement for all the r.busy = 1 cases (continuing
  execution of an instruction which was started in a previous cycle,
  or writing SRR1 for an interrupt).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds an explicit multiplexer feeding v.e.write_data in execute1,
with the select lines determined in the previous cycle based on the
insn_type. Similarly, for multiply and divide instructions, there is
now an explicit multiplexer.
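As an illustrative fragment of the pattern (the select encoding and
names are assumptions):

    -- Fragment: r.result_sel was chosen in the previous cycle from
    -- the insn_type.
    case r.result_sel is
        when SEL_ADDER   => v.e.write_data := adder_result;
        when SEL_LOGICAL => v.e.write_data := logical_result;
        when SEL_ROTATOR => v.e.write_data := rotator_result;
        when others      => v.e.write_data := misc_result;
    end case;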
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This does the addition of the instruction NIA and the branch offset
after the register at the output of execute1 rather than before.
The propagation through the adder was showing up as a critical path
on the A7-100. Performance is unaffected and now it makes timing.
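Schematically something like the following fragment, where r is
execute1's output register (names illustrative):

    -- Add the branch offset to the registered NIA, i.e. after the
    -- output register rather than before it.
    branch_target <= std_ulogic_vector(unsigned(r.nia) + unsigned(r.br_offset));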
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This makes it simpler to work out when to deliver a FPU unavailable
interrupt. This also means we can get rid of the OP_FPLOAD and
OP_FPSTORE insn_type values.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This does the addition of NIA plus the branch offset from the
instruction after a clock edge, in order to ease timing, as the path
from the icache RAM through the adder in decode1 to the NIA register
in fetch1 was showing up as a critical path.
This adds one extra cycle of latency when redirecting fetch because of
a predicted-taken branch.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Lq and stq are tested in both BE and LE modes (though only 64-bit
mode) by the 'modes' test.
Lqarx and stqcx. are tested by the 'reservation' test in LE mode
(64-bit).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This makes timing easier and also means that store floating-point
single precision instructions no longer need to take an extra cycle.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This implements the lq, stq, lqarx and stqcx. instructions.
These instructions all access two consecutive GPRs; for example the
"lq %r6,0(%r3)" instruction will load the doubleword at the address
in R3 into R7 and the doubleword at address R3 + 8 into R6. To cope
with having two GPR sources or destinations, the instruction gets
repeated at the decode2 stage, that is, for each lq/stq/lqarx/stqcx.
coming in from decode1, two instructions get sent out to execute1.
For these instructions, the RS or RT register gets modified on one
of the iterations by setting the LSB of the register number. In LE
mode, the first iteration uses RS|1 or RT|1 and the second iteration
uses RS or RT. In BE mode, this is done the other way around. In
order for decode2 to know what endianness is currently in use, we
pass the big_endian flag down from icache through decode1 to decode2.
This is always in sync with what execute1 is using because only rfid
or an interrupt can change MSR[LE], and those operations all cause
a flush and redirect.
There is now an extra column in the decode tables in decode1 to
indicate whether the instruction needs to be repeated. Decode1 also
enforces the rule that lq with RA = RT and lqarx with RA = RT or
RB = RT are illegal.
Decode2 now passes a 'repeat' flag and a 'second' flag to execute1,
and execute1 passes them on to loadstore1. The 'repeat' flag is set
for both iterations of a repeated instruction, and 'second' is set
on the second iteration. Execute1 does not take asynchronous or
trace interrupts on the second iteration of a repeated instruction.
Loadstore1 uses 'next_addr' for the second iteration of a repeated
load/store so that we access the second doubleword of the memory
operand. Thus loadstore1 accesses the doublewords in increasing
memory order. For 16-byte loads this means that the first iteration
writes GPR RT|1. It is possible that RA = RT|1 (this is a legal
but non-preferred form), meaning that if the memory operand was
misaligned, the first iteration would overwrite RA but then the
second iteration might take a page fault, leading to corrupted state.
To avoid that possibility, 16-byte loads in LE mode take an
alignment interrupt if the operand is not 16-byte aligned. (This
is the case anyway for lqarx, and we enforce it for lq as well.)
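A sketch of the register-number adjustment described above;
second_pass, big_endian and base_reg are illustrative names, only the
LE/BE and first/second-iteration behaviour comes from the text:

    -- Use RS|1 / RT|1 on the first iteration in LE mode and on the
    -- second iteration in BE mode, the plain register otherwise.
    if second_pass = big_endian then
        reg_num := base_reg or "00001";
    else
        reg_num := base_reg;
    end if;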
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This compares the address being fetched with the contents of a
register that can be set via DMI, and if they match, stops the
logging. Since this works on the address being fetched rather than
executed, it is subject to false positives.
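Conceptually something like this fragment (signal names are
assumptions):

    -- Stop the core log when the fetch address matches the trigger
    -- address programmed via DMI.
    if log_trigger_enable = '1' and fetch_nia = log_trigger_addr then
        log_stop <= '1';
    end if;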
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>