Updated the documentation to follow the latest versions:
microwatt-2022.08 buildroot and Linux kernel v6.11.0-rc3.
Signed-off-by: Yunseong Kim <yskelg@gmail.com>
Commit 0ceace927c ("Xilinx FPGAs: Eliminate Vivado critical
warnings", 2024-03-08) incorrectly removed the constraints for
shield_io36 through to shield_io44 (due to me applying the wrong
version of a patch), resulting in Vivado giving compile errors when
building for the Arty A7. This restores the constraints.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Some signals have changed names: "eth_" has been dropped from the
names of the MII/GMII/RGMII signals.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This resolves various warnings and critical warnings from Vivado.
In particular, the asynchronous loops in the xilinx hardware RNG were
giving a lot of critical warnings, which proved to be difficult to
suppress, so this instead makes all the xilinx platforms use the
'nonrandom.vhdl' implementation, which always returns an error.
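In sketch form, the stub amounts to something like this (the port
names here are illustrative, not necessarily microwatt's exact RNG
interface):

    library ieee;
    use ieee.std_logic_1164.all;

    entity random is
        port (
            clk      : in std_ulogic;
            data     : out std_ulogic_vector(63 downto 0);
            data_err : out std_ulogic
            );
    end entity random;

    architecture behaviour of random is
    begin
        -- No entropy source: return all-ones and flag an error,
        -- so software sees every darn request fail.
        data <= (others => '1');
        data_err <= '1';
    end architecture behaviour;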
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This fixes the following warning:
fetch1.vhdl:293:18:warning: declaration of "eaa_priv" hides signal "eaa_priv" [-Whide]
        variable eaa_priv : std_ulogic;
                 ^
In fact the signal "eaa_priv" is unused, so remove it.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
With ftdiv, we weren't setting result_exp to B.exponent before
testing result_exp in state FTDIV_1; the fix is to transfer B.exponent
to result_exp in state DO_FTDIV.
With ftsqrt, we were always setting bit 1 of the destination CR
field to 0, due to a typo.
Also move a couple of statements around to try to get slightly simpler
logic.
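The shape of the ftdiv fix, as a state-machine fragment (field and
state names as described above; surrounding logic omitted):

    case r.state is
        when DO_FTDIV =>
            -- the fix: load result_exp before FTDIV_1 tests it
            v.result_exp := b.exponent;
            v.state := FTDIV_1;
        when FTDIV_1 =>
            -- exponent-range tests on r.result_exp happen here
            null;
        when others =>
            null;
    end case;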
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This regenerates the verilog code from upstream litex plus a patch to
generate outputs from the litesdcard module for controlling
bidirectional buffers between the FPGA and SD card.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
In future we will want to support targets using the same vendor but
running at different clock frequencies. Since the clock frequency is
a parameter to the gateware generation process, we now name the target
directories as "vendor.frequency", i.e., "xilinx.100e6" and
"lattice.48e6" rather than "xilinx" and "lattice".
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The flash chip on my board is an ISSI IS26LP256P chip. The ISSI chip
requires slightly different setup for quad mode from the other brands,
but works fine with the existing SPI flash interface logic here.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Renormalization of the divisor for fdiv[s] was adjusting the result
exponent in the wrong direction, making the result smaller in
magnitude than it should be by a power of 2. Fix this by negating
r.shift in the RENORM_B2 state and then subtracting it in the LOOKUP
cycle.
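In sketch form (field and state names as above; r.shift is assumed
to be a signed shift amount):

    case r.state is
        when RENORM_B2 =>
            v.shift := -r.shift;   -- negate the renormalization shift
        when LOOKUP =>
            -- subtracting the negated shift adjusts the result
            -- exponent in the correct direction
            v.result_exp := r.result_exp - r.shift;
        when others =>
            null;
    end case;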
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The sign recorded in FPRF was sometimes wrong because, when setting
FPRF (an FPSCR field), we weren't applying the sign modifications
that pack_dp does. These modifications are: set sign for zero result of
subtraction based on rounding mode; negate result for fnmadd/sub;
but don't modify sign of NaNs.
Instead we now do these modifications in the main state machine code
and put the result in an 'rsign' variable that is used to set
v.res_sign, then r.res_sign is used in the next cycle both for setting
FPRF and in the pack_dp functions. That simplifies pack_dp and lets
us get rid of r.res_negate, r.res_subtract and r.res_rmode.
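A sketch of the new arrangement (the condition names here are
illustrative booleans, not the actual signals):

    -- main state machine: compute the final result sign
    rsign := result_sign;
    if not result_is_nan then
        if is_subtract and result_is_zero then
            -- exact-zero difference: negative only when rounding
            -- toward minus infinity
            if round_to_minus_inf then
                rsign := '1';
            else
                rsign := '0';
            end if;
        end if;
        if negate_result then            -- fnmadd/fnmsub
            rsign := not rsign;
        end if;
    end if;
    v.res_sign := rsign;
    -- next cycle, r.res_sign feeds both FPRF and pack_dp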
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Grep in Fedora 39 has started warning when invoked as 'egrep',
so use grep -E instead to avoid the warnings.
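For example, a script line such as "egrep 'foo|bar' file" becomes
"grep -E 'foo|bar' file"; the behaviour is identical, only the
warning goes away.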
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
- Provide next_nia before clock edge where req is asserted
- Set rpn and next_rpn to zero
- There is no longer an input to the icache from the MMU
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Now that we are translating the fetch effective address to a real
address one cycle earlier, we can use the real address to index the
icache array.
This has the benefit that the set size can be larger than a page,
enabling us to configure the icache to be larger without having to
increase its associativity. Previously the set size was limited to
the page size to avoid aliasing problems. Thus for example a 32kB
icache would need to be 8-way associative, resulting in large numbers
of LUTs being used for tag comparisons in FPGA implementations, and
poor timing. With this change, a 32kB icache can be 1 or 2-way
associative, which means deeper and narrower tag and data RAMs and
fewer tag comparators.
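To spell out the arithmetic: with effective-address indexing, the
set size (cache size divided by the number of ways) must not exceed
the 4kB page size, so a 32kB icache needs at least 32kB / 4kB = 8
ways. Indexing by real address removes that limit, so the same 32kB
can be organized as, say, 2 ways of 16kB sets.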
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This moves the address translation step for instruction fetches one
cycle earlier, so that it now happens in the fetch1 stage. There is
now a 2-entry mini translation cache ("ERAT", or effective to real
address translation cache) which operates on the output of the
multiplexer that selects the instruction address for the next cycle.
The ERAT consists of two effective address registers and two
corresponding real address registers. They store the page number part
of the addresses for a 4kB page size, which is the smallest page size
supported by the architecture.
If the effective address doesn't match either of the EA registers, and
address translation is enabled, then i_out.req goes low for two cycles
while the iTLB is looked up. Experimentally, this delay results in a
0.1% drop in CoreMark performance; allowing two cycles for the lookup
results in better timing. The result from the iTLB is placed into the
least recently used ERAT entry and then used to translate the address
as normal. If address translation is not enabled then the EA is used
directly as the real address.
The iTLB structure is the same as it was before; direct mapped,
indexed using a hashed EA.
The "fetch failed" signal, which indicates a TLB miss or protection
violation, is now generated in fetch1 and passed through icache.
When it is asserted, fetch1 goes into a stalled state until a PTE
arrives from the MMU (which gets put into both the iTLB and the ERAT),
or an interrupt or redirect occurs.
Any TLB invalidations from the MMU invalidate the whole ERAT.
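A sketch of the lookup (names and widths illustrative: 64-bit EA,
4kB pages, so the page number is bits 63 downto 12):

    if erat_valid(0) = '1' and next_ea(63 downto 12) = erat_ea(0) then
        real_addr <= erat_ra(0) & next_ea(11 downto 0);
        erat_hit <= '1';
    elsif erat_valid(1) = '1' and next_ea(63 downto 12) = erat_ea(1) then
        real_addr <= erat_ra(1) & next_ea(11 downto 0);
        erat_hit <= '1';
    else
        -- miss: drop i_out.req for two cycles, look up the iTLB,
        -- and fill the least recently used entry with the result
        erat_hit <= '0';
    end if;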
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Now that the icache tag RAM is accessed synchronously, the free tools
recognize it as block RAM on ECP5-based platforms; thus we no longer
need to force it to a very small value.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This uses the next_nia provided to us by fetch1 to enable the icache
tag RAM to be read synchronously (using a clock edge), which should
enable block RAMs to be used on FPGAs rather than LUT RAM or
flip-flops. We define a separate RAM per way to avoid any problems
with the tools trying to infer byte write enables for writing to a
single way.
Since next_nia can move on, we only get one shot at reading the
cache tag RAM entry for the current access. If it is a miss, then the
state machine will read the cache line from RAM, and we can consider
the access to be a hit once the state machine has brought in the
doubleword we need. The TLB hit/miss check has been modified to check
r.store_tag rather than the tag read from the tag RAM for this case.
However, it is also possible that stall_in will be asserted for the
whole time until the cache line refill is completed. To handle this
case, we remember (in r.stalled_hit) that we detected a hit while
stalled, and use that hit once stall_in is deasserted. This avoids
doing an unnecessary second reload of the same cache line. The
r.stalled_hit flag gets cleared in CLR_TAG state since that is when
cache tags can be overwritten, meaning that a previously detected hit
might no longer be valid.
There is also the case where the tag read from the tag RAM is the one
we are looking for, and is the same index as the line that is starting
to be reloaded by the state machine. If the icache gets stalled for
long enough that the line reload finishes, it would then be possible
for the access to be detected as a hit even though the cache line has
been overwritten. To counter this, we detect the case where the cache
tag RAM entry being read is the same as the entry being written and
set a 'tag_overwrite' flag bit to indicate that one of the tags in
cache_tags_set is no longer valid.
For snooping writes to memory, we have a second read port on the cache
tag RAM. These tags are also read synchronously, so the logic for
clearing cache line valid bits on a snoop has been adjusted (the tag
comparisons and valid bit clearing now happen in the same cycle).
This also simplifies the expression for 'insn' by removing a
dependency on r.hit_valid, fixes the instruction value sent to the
log, and deasserts stall_out when flush_in is true.
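A sketch of the per-way tag RAM (names illustrative; tag_ram_array
is assumed to be an array of tag words, with one RAM instance per
way so that no byte write enables need to be inferred):

    tag_ram_gen : for way in 0 to NUM_WAYS - 1 generate
        signal tag_ram : tag_ram_array;
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                if tag_wr_en(way) = '1' then
                    tag_ram(wr_index) <= wr_tag;
                end if;
                -- read is indexed by next_nia, so the tag arrives
                -- on the clock edge that starts the access
                tag_rd_data(way) <= tag_ram(rd_index);
            end if;
        end process;
    end generate;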
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Using i_in.next_nia means that we can read the iTLB RAM arrays
synchronously rather than asynchronously, which gives more opportunity
for using block RAMs in FPGA implementations.
The reading is gated by the stall signals because the next_nia can
advance when stalled, but we need the iTLB entry for the instruction
that i_in.nia points to. If we are stalled because of an iTLB miss,
that means we don't see the new iTLB entry when it is written.
Instead we save the new entry directly when it arrives and use it
instead of the values read from the iTLB RAM.
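In sketch form (names illustrative; tlb_hash stands for the existing
EA hash):

    process(clk)
    begin
        if rising_edge(clk) then
            if stall_in = '0' then
                -- index by next_nia so the entry for i_in.nia is
                -- available in the following cycle
                itlb_rd_pte <= itlb_ptes(tlb_hash(i_in.next_nia));
            end if;
            if itlb_wr_en = '1' then
                itlb_ptes(itlb_wr_index) <= itlb_wr_pte;
                -- keep a copy: while stalled on an iTLB miss the RAM
                -- output is not re-read, so this copy is used instead
                itlb_new_pte <= itlb_wr_pte;
            end if;
        end if;
    end process;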
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This reduces the number of possible sources for the next NIA from 4
down to 3, by routing interrupt vector addresses through the
r_int.next_nia register, as is already done for reset. This adds one
extra cycle of latency when taking interrupts. During this extra cycle,
i_out.req is 0.
Writeback now no longer combines redirects (branches, rfid, isync)
with interrupts; they are presented separately to fetch1.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds a next_nia field to the Fetch1ToIcacheType record, which
provides an indication of what will be in the nia field on the next
non-stalled cycle. This is intended to be as fast as possible, being
a selection from two redirect addresses (from writeback and decode1)
or an internal register (r_int.next_nia). Reset addresses and
predicted branch targets come through this internal register.
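In sketch form (other fields elided):

    type Fetch1ToIcacheType is record
        req      : std_ulogic;
        nia      : std_ulogic_vector(63 downto 0);
        -- what nia will hold on the next non-stalled cycle
        next_nia : std_ulogic_vector(63 downto 0);
        -- ... remaining fields unchanged ...
    end record;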
The rearrangement here has the side effect that we can now use the BTC
on the first instruction after a taken branch, whereas previously the
BTC was only active starting with the second instruction after a taken
branch. This provides a slight improvement in performance.
This also fixes a buglet in icache where it would assert its stall
output when i_in.req was false.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This moves the addition that computes the branch target address for
statically predicted taken branches before a clock edge, so the
redirect_nia signal going to fetch1 comes from a clean latch. The
address generation logic is also simplified somewhat, and conditional
absolute branches to negative addresses are no longer predicted taken
(this should have no impact on performance as such branches are
basically never used).
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This gets rid of the adder in writeback that computes redirect_nia.
Instead, the main adder in the ALU is used to compute the branch
target for relative branches. We now decode b and bc differently
depending on the AA field, generating INSN_brel, INSN_babs, INSN_bcrel
or INSN_bcabs as appropriate. Each one has a separate entry in the
decode table in decode1; the *rel versions use CIA as the A input.
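A sketch of the split, using little-endian bit numbering in which
the AA field is bit 1 of the instruction word:

    case insn(31 downto 26) is        -- primary opcode
        when "010010" =>              -- 18: b
            if insn(1) = '1' then
                icode := INSN_babs;
            else
                icode := INSN_brel;   -- CIA as the A input
            end if;
        when "010000" =>              -- 16: bc
            if insn(1) = '1' then
                icode := INSN_bcabs;
            else
                icode := INSN_bcrel;
            end if;
        when others =>
            null;
    end case;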
The bclr/bcctr/bctar and rfid instructions now select ramspr_result
for the main result mux to get the redirect address into
ex1.e.write_data.
For branches which are predicted taken but not actually taken, we need
to redirect to the following instruction. We also need to do that for
isync. We do this in the execute2 stage since whether or not to do it
depends on the branch result. The next_nia computation is moved to
the execute2 stage and comes in via a new leg on the secondary result
multiplexer, making next_nia available ultimately in ex2.e.write_data.
This also means that the next_nia leg of the primary result
multiplexer is gone. Incrementing last_nia by 4 for sc (so that SRR0
points to the following instruction) is also moved to execute2.
Writing CIA+4 to LR was previously done through the main result
multiplexer. Now it comes in explicitly in the ramspr write logic.
Overall this removes the br_offset and abs_br fields and the logic to
add br_offset and next_nia, and one leg of the primary result
multiplexer, at the cost of a few extra control signals between
execute1 and execute2 and some multiplexing for the ramspr write side
and an extra input on the secondary result multiplexer.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
The icache stores a predecoded insn_code value for each instruction,
and so as to fit in 36 bits, omits the primary opcode (the most
significant 6 bits) of each instruction. Previously, for valid
instructions, the primary opcode field of the instruction delivered to
decode1 was a partial representation of the insn_code value rather than
the actual primary opcode. This adds a lookup table to compute the
primary opcode from the insn_code and deliver it in the instruction
words supplied to decode1.
In order that each insn_code can be associated with a single primary
opcode value, the various no-operation instructions with primary
opcode 31 (the reserved no-ops and dss, dst and dstst) have been given
a new insn_code, INSN_rnop, leaving INSN_nop for the preferred no-op
(ori r0,r0,0).
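A sketch of the lookup (type and constant names illustrative; the
values shown are the real primary opcodes):

    type op_array_t is array (insn_code) of std_ulogic_vector(5 downto 0);
    constant primary_op : op_array_t := (
        INSN_addi => "001110",    -- 14
        INSN_ori  => "011000",    -- 24
        INSN_rnop => "011111",    -- 31: reserved no-ops, dss/dst/dstst
        -- ... one entry per insn_code ...
        others    => "000000");

    -- restore the top 6 bits of the word delivered to decode1
    insn_out(31 downto 26) <= primary_op(icode);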
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Log the instruction read from the icache, not the instruction (if any)
being written to the icache.
Fixes: 6db626d245 ("icache: Log 36 bits of instruction rather than 32")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
This adds a new type of stop trigger for the log buffer which triggers
when any byte(s) of a specified doubleword of memory are written.
The trigger logic snoops the wishbone for writes to the address
specified and stops the log 256 cycles later (same as for the
instruction fetch address trigger). The trigger address is a real
address and sees DMA writes from devices as well as stores done by the
CPU.
The mw_debug command has a new 'mtrig' subcommand to set the trigger
and query its state.
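A sketch of the snoop logic (names illustrative; trigger_addr holds
the doubleword real address being watched):

    process(clk)
    begin
        if rising_edge(clk) then
            if wb_in.cyc = '1' and wb_in.stb = '1' and wb_in.we = '1'
                and wb_in.adr = trigger_addr then
                triggered <= '1';
            end if;
            if triggered = '1' and log_stopped = '0' then
                -- stop the log 256 cycles after the write is seen
                count <= count + 1;
                if count = 255 then
                    log_stopped <= '1';
                end if;
            end if;
        end if;
    end process;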
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
With this, the logic that maintains r1.acks_pending operates in every
state based on r1.wb and wishbone_in, rather than only operating in
STORE_WAIT_ACK state. This makes things a bit clearer and improves
timing slightly.
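In sketch form (stb_sent is a std_ulogic variable; this runs in
every state, not just STORE_WAIT_ACK):

    stb_sent := r1.wb.cyc and r1.wb.stb and not wishbone_in.stall;
    if stb_sent = '1' and wishbone_in.ack = '0' then
        v.acks_pending := r1.acks_pending + 1;
    elsif stb_sent = '0' and wishbone_in.ack = '1' then
        v.acks_pending := r1.acks_pending - 1;
    end if;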
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>