This stores the most common SPRs in the register file.
This includes CTR and LR, plus a not-yet-final list of others.
The register file is set to 64 entries for now. Specific types
are defined that can represent a GPR index (gpr_index_t) or
a GPR/SPR index (gspr_index_t) along with conversion functions
between the two.
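A minimal sketch of what these types and conversions might look like
(the exact split of the 64 entries shown here is illustrative):

    library ieee;
    use ieee.std_logic_1164.all;

    subtype gpr_index_t  is std_ulogic_vector(4 downto 0);
    subtype gspr_index_t is std_ulogic_vector(5 downto 0);

    -- a GPR index maps into the low half of the combined space
    function gpr_to_gspr(i : gpr_index_t) return gspr_index_t is
    begin
        return "0" & i;
    end;

    -- only valid when the index actually names a GPR (top bit clear)
    function gspr_to_gpr(i : gspr_index_t) return gpr_index_t is
    begin
        return i(4 downto 0);
    end;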
In order to deal with some forms of branch updating both LR and
CTR, we introduced a delayed update of LR after a branch link.
Note: We currently stall the pipeline on such a delayed branch,
but we could avoid stalling fetch in that specific case as we
know we have a branch delay. We could also limit that to the
specific case where we need to update both CTR and LR.
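A rough illustration of the delayed update (signal names here are
made up): the branch writes CTR through the register file write port
as usual, latches the link value, and the queued LR write takes the
write port on the following (stalled) cycle:

    process(clk)
    begin
        if rising_edge(clk) then
            -- remember that LR still needs writing next cycle
            lr_update_pending <= do_branch_link;
            lr_value          <= next_nia;       -- nia + 4
        end if;
    end process;

    -- while pending, the LR write owns the register file write
    -- port (fetch is stalled, so nothing else competes for it)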
This allows us to make bcreg, mtspr and mfspr pipelined. decode1
will automatically force the single-issue flag on mfspr/mtspr
accesses to a "slow" SPR.
[paulus@ozlabs.org - fix direction of decode2.stall_in]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
We will want to store some SPRs in the register file using
a set of "extra" registers. This provides a function for
doing the translation along with some SPR definitions.
This isn't used yet.
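A sketch of the idea (the encodings are illustrative): map an
architected SPR number onto one of the "extra" entries, with the top
bit selecting the SPR half of the register file:

    constant SPR_LR  : natural := 8;
    constant SPR_CTR : natural := 9;

    function fast_spr_num(spr : natural) return gspr_index_t is
    begin
        case spr is
            when SPR_LR  => return "100000";
            when SPR_CTR => return "100001";
            -- callers must know the SPR is actually a "fast" one
            when others  => return "000000";
        end case;
    end;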
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
We were copying in XER[SO] for the dot-form instructions but not the
explicit compare instructions. Fix this.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The carry is currently internal to execute1. We don't handle any of
the other XER fields.
This creates a type called "xer_common_t" that contains the commonly
used XER bits (CA, CA32, SO, OV, OV32).
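i.e. something like:

    type xer_common_t is record
        so   : std_ulogic;   -- summary overflow
        ov   : std_ulogic;   -- overflow
        ov32 : std_ulogic;   -- 32-bit overflow
        ca   : std_ulogic;   -- carry
        ca32 : std_ulogic;   -- 32-bit carry
    end record;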
The value is stored in the CR file (though it could be a separate
module). The rest of the bits will be implemented as a separate
SPR and the two parts reconciled in mfspr/mtspr in later commits.
We always read XER in decode2 (there is little point in not doing so)
and send it down all pipeline branches as it will be needed in
writeback for all types of instructions when CR0:SO needs to be
updated (such forms exist for all pipeline branches even if we don't
yet implement them).
To avoid having to track XER hazards, we forward it back in EX1. This
assumes that other pipeline branches that can modify it (mult and div)
are running single issue for now.
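A sketch of that forwarding (signal names here are illustrative):

    -- if the instruction completing in EX1 wrote XER, use that
    -- value rather than the copy read from the CR file in decode2
    xerc_in <= xerc_last when wrote_xerc_last = '1' else e_in.xerc;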
One additional hazard to beware of is an XER:SO-modifying instruction
in EX1 followed immediately by a store conditional. Due to our writeback
latency, the store will go down the LSU with the previous XER value,
and the stcx. will thus set CR0:SO using an obsolete SO value.
I doubt any code exists that relies on this behaviour being correct,
but we should account for it regardless, possibly by ensuring that
stcx. remains single-issue initially, or later by adding some minimal
tracking or by moving the LSU into the same pipeline as execute.
We are still missing some obscure XER-affecting instructions such as
addex or mcrxrx.
[paulus@ozlabs.org - fix CA32 and OV32 for OP_ADD, fix order of
arguments to set_ov]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
There is no reason not to that I can think of.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
To match our one-stage execute.
This might change back if we end up adding 2 stages to match the
LSU, but in that case we'll want forwarding as well.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Use a function to obtain the integer SPR number, and use constants
with the architected numbers. Replace std_match with a case
statement.
This also has the side effect of returning 0 instead of some
random previous result on mfspr of an unknown SPR.
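For reference, the 10-bit SPR field in mfspr/mtspr has its two 5-bit
halves swapped, so the helper looks roughly like this (a sketch):

    constant SPR_XER : natural := 1;
    constant SPR_LR  : natural := 8;
    constant SPR_CTR : natural := 9;

    function decode_spr_num(insn : std_ulogic_vector(31 downto 0))
        return natural is
    begin
        return to_integer(unsigned(insn(15 downto 11) &
                                   insn(20 downto 16)));
    end;

with the case statement's "others" branch returning zero.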
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This flips the arbiter muxes on the same cycle as a new request
comes in, thus avoiding a cycle latency.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Consecutive accesses from the same master shouldn't need an IDLE
cycle. Completely remove the IDLE state and switch master when
the bus is idle, but stay on the last selected one between cycles.
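A sketch of the selection logic for two masters (names illustrative);
the output mux itself stays combinational, so back-to-back cycles
from the same master see no dead cycle:

    process(clk)
    begin
        if rising_edge(clk) then
            -- re-arbitrate only when the current owner's cycle is done
            if wb_masters_in(selected).cyc = '0' then
                if wb_masters_in(0).cyc = '1' then
                    selected <= 0;
                elsif wb_masters_in(1).cyc = '1' then
                    selected <= 1;
                end if;
                -- otherwise stay on the last selected master
            end if;
        end if;
    end process;

    wb_slave_out <= wb_masters_in(selected);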
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Stores only need a single cycle, so we can ack them early if there
isn't an older ack already in the pipeline.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This replaces the simple_ram_behavioural and mw_soc_memory modules
with a common wishbone_bram_wrapper.vhdl that interfaces the
pipelined WB with a lower-level RAM module, along with FPGA
and sim variants of the latter.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This adds an output buffer to help with timing and allows the BRAMs
to actually pipeline.
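This is the classic registered-output BRAM inference pattern
(a sketch; names illustrative):

    process(clk)
    begin
        if rising_edge(clk) then
            -- first stage: the BRAM read itself
            ram_data <= ram(to_integer(unsigned(addr)));
            -- second stage: output buffer; lets the tools pack this
            -- register into the BRAM's built-in output flops
            data_out <= ram_data;
        end if;
    end process;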
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Vivado by default tries to flatten the module hierarchy to improve
placement and timing. However this makes debugging timing issues
really hard as the net names in the timing report can be pretty
bogus.
This adds a generic that can be used to control attributes to stop
Vivado from flattening the main core components. The resulting design
will have worse timing overall, but it will be easier to understand
what the worst timing paths are and address them.
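The mechanism is Vivado's keep_hierarchy attribute on instance
labels, roughly (instance name illustrative):

    attribute keep_hierarchy : string;
    attribute keep_hierarchy of execute1_0 : label is "yes";

with the generic selecting "yes" or "no" as the attribute value.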
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The CR update currently depends on the complete data formatting
mux chain. This makes it source its inputs from a bit earlier in
the chain, thus improving timing a bit.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This doesn't yet pipeline the block RAM; it just generates a valid
stall signal so it's compatible with a pipelined master.
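For a slave that only handles one request at a time, a valid stall
signal is simply "stall until the current request is acked", e.g.
(a sketch):

    stall <= wb_in.cyc and not ack;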
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The generic PIPELINE_DEPTH can be set to 0 to keep it operating
as a non-pipelined slave, or to a larger value indicating the number
of extra cycles between requests and acks.
It will always generate a valid stall signal, so it can be used
in either mode with a pipelined master (but only in non-pipelined
mode with a non-pipelined master).
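A sketch of how the ack delay can be implemented (names
illustrative): one valid bit per in-flight request shifts down a
pipe of length PIPELINE_DEPTH + 1:

    signal ack_pipe : std_ulogic_vector(PIPELINE_DEPTH downto 0);

    process(clk)
    begin
        if rising_edge(clk) then
            ack_pipe <= (wb_in.cyc and wb_in.stb and not stall)
                        & ack_pipe(PIPELINE_DEPTH downto 1);
        end if;
    end process;
    ack <= ack_pipe(0);

With PIPELINE_DEPTH = 0 the slice is null and this degenerates to
the classic one-cycle ack.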
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
For now ... it reduces the routing pressure on the FPGA.
This needs manual adjustment of the address decoder in soc.vhdl, at
least until I can figure out how to deal with std_match.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
All that needs to be changed now is the size in wishbone_types.vhdl
and the address decoder in soc.vhdl.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We don't yet have a proper snooper for the icache, so for now make
icbi just flush the whole thing.
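i.e. roughly (a sketch; signal names illustrative):

    -- no snooper yet: treat icbi as "invalidate everything"
    if inval_in = '1' then
        cache_valids <= (others => (others => '0'));
    end if;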
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The instruction works by redirecting fetch to nia+4 (hopefully using
the same adder used to generate LR) and doing a backflush. Along with
being single issue, this should guarantee that the next instruction
only gets fetched after the pipe's been emptied.
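A sketch of the effect in execute (names illustrative):

    -- treat it as a taken branch to the next instruction, flushing
    -- everything fetched behind it
    f_out.redirect     <= '1';
    f_out.redirect_nia <= next_nia;  -- nia + 4, shared with LR adder
    flush_out          <= '1';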
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We used the variable "way" in the wrong state in the cache when
updating a line's valid bit after the end of the wishbone transactions;
we need to use the latched "store_way" instead.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Clearly separate the 2 stages of load hits, improve naming and
comments, clarify the writeback controls etc...
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>