vhdl - What is the practical difference between implementing FOR-LOOP and FOR-GENERATE? When is it better to use one over the other?
Let's suppose I have to test different bits of a std_logic_vector. Is it better to implement one single process that for-loops over each bit, or to instantiate 'n' processes using for-generate, where each process tests one bit?
for-loop
my_process: process(clk, reset)
begin
    if rising_edge(clk) then
        if reset = '1' then
            -- init stuff
        else
            for_loop: for i in 0 to n loop
                test_array_bit(i);
            end loop;
        end if;
    end if;
end process;
for-generate
for_generate: for i in 0 to n generate
begin
    my_process: process(clk, reset)
    begin
        if rising_edge(clk) then
            if reset = '1' then
                -- init stuff
            else
                test_array_bit(i);
            end if;
        end if;
    end process;
end generate;
What is the impact on FPGA and ASIC implementations in each case? Which is easier for the CAD tools to deal with?
Edit: adding the response I gave to one helpful commenter, to make the question clearer:
For instance, when I ran a piece of code using for-loops through ISE, the synthesis summary gave me a fair result, but it took a long while to compute everything. When I re-coded the design, this time using for-generate and several processes, it used a bit more area, but the tool computed it far faster and the timing result was better as well. So, does this imply as a rule that it is better to use for-generate at the cost of some area and lower complexity, or is this one of those cases where I have to verify every single implementation possibility?
Assuming relatively simple logic in the reset and test functions (for example, no interactions between adjacent bits), I would have expected both to generate the same logic.

Understand that, since the entire for loop is executed in a single clock cycle, synthesis will unroll it and generate a separate instance of test_array_bit for each input bit. Therefore it is quite possible for the synthesis tools to generate identical logic for both versions, at least in this simple example.
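For illustration, here is a minimal, self-contained sketch of the two styles as two architectures of the same entity, which you could feed to the synthesiser to compare them directly. The entity, port names, and the trivial per-bit "test" (masking a bit) are my own assumptions, not taken from the question:

    library ieee;
    use ieee.std_logic_1164.all;

    entity bit_tester is
        generic (n : natural := 7);   -- index of the highest bit
        port (clk   : in  std_logic;
              reset : in  std_logic;
              d_in  : in  std_logic_vector(n downto 0);
              mask  : in  std_logic_vector(n downto 0);
              d_out : out std_logic_vector(n downto 0));
    end entity;

    -- Style 1: one process, a for loop that synthesis unrolls.
    architecture loop_arch of bit_tester is
    begin
        my_process: process(clk)
        begin
            if rising_edge(clk) then
                if reset = '1' then
                    d_out <= (others => '0');
                else
                    for i in 0 to n loop
                        d_out(i) <= d_in(i) and mask(i);  -- per-bit "test"
                    end loop;
                end if;
            end if;
        end process;
    end architecture;

    -- Style 2: n+1 processes elaborated by for-generate,
    -- each driving exactly one bit of d_out.
    architecture gen_arch of bit_tester is
    begin
        for_generate: for i in 0 to n generate
            my_process: process(clk)
            begin
                if rising_edge(clk) then
                    if reset = '1' then
                        d_out(i) <= '0';
                    else
                        d_out(i) <= d_in(i) and mask(i);  -- per-bit "test"
                    end if;
                end if;
            end process;
        end generate;
    end architecture;

Select which architecture to synthesise with a configuration (or by keeping only one in the project); with a per-bit function this simple, both should reduce to the same n+1 flip-flops plus AND gates.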
And on that basis, I would (marginally) prefer the for ... loop version, because it localises the program logic, whereas the "generate" version globalises it, placing it outside the process boilerplate. If you find the loop version easier to read, I agree, at that level.
However, it doesn't pay to be dogmatic about style, and your experiment illustrates this: the loop version synthesises to inferior hardware. Synthesis tools are complex and imperfect pieces of software, much like highly optimising compilers, and they share many of the same issues. They may miss an "obvious" optimisation, or make a complex optimisation that (in software, say) runs slower because its increased size trashed the cache.
So it's preferable to write in the cleanest style you can, while keeping some flexibility for working around tool limitations and real tool defects.
Different versions of the tools remove (and introduce) such defects. You may find that ISE's "use new parser" option (for pre-Spartan-6 parts), or Vivado, or Synplicity gets right what ISE's older parser doesn't. (For example, older ISE versions had serious bugs around passing signals out of procedures.)
It might be instructive to modify your example and see if synthesis can "get it right" (produce the same hardware) in the simplest case, then re-introduce complexity until you find the construct that fails.
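As a starting point for that kind of bisection, here is about the simplest case I can think of, with the per-bit "test" stripped down to a plain registered pass-through (again, my own assumption for the body of test_array_bit, not from the question):

    -- Simplest case: the per-bit "test" degenerates to a register.
    -- Both the loop and the generate style should synthesise to n+1
    -- flip-flops; if they don't, the difference is a tool artefact
    -- rather than a consequence of the coding style.
    simplest_loop: process(clk)
    begin
        if rising_edge(clk) then
            if reset = '1' then
                d_out <= (others => '0');
            else
                for i in 0 to n loop
                    d_out(i) <= d_in(i);
                end loop;
            end if;
        end if;
    end process;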
If you discover something concrete this way, it's worth reporting it here (by answering your own question). Xilinx used to encourage reporting such defects via its WebCase system, and they got fixed! They seem to have stopped that, however, in the last year or two.