Before I start: I know this forum isn't a general Verilog help forum, but it's full of knowledgeable and friendly people, so I'm still going to ask my question....
I'm still very much a beginner, and to give my learning some focus I've set myself the goal of creating an equivalent of the Arduino FastLED library for producing visual effects on a string of WS2812 pixels. (To actually drive the string I've used a bit of code I found on the web somewhere - it seems to work fine.)
Anyhow, functionally all is going well - so far I've been able to create the effects I wanted - but I'm a bit worried about the efficiency of my code. I haven't finished yet, and already a single instance of my controller driving a single string of LEDs takes almost 25% of the logic cells on the MyStorm Mx board:
Info: Device utilisation:
Info:          ICESTORM_LC:  1843/ 7680    23%
Info:         ICESTORM_RAM:     0/   32     0%
Info:                SB_IO:     6/  256     2%
Info:                SB_GB:     6/    8    75%
Info:         ICESTORM_PLL:     1/    2    50%
Info:          SB_WARMBOOT:     0/    1     0%
Would anyone care to cast an eye over my code and suggest how I could achieve the same thing more efficiently?
(The general design concept of the controller is that it is supplied with an LED index by the WS2812 driver, together with various control channels, and it must output the RGB values for that pixel back to the driver. Within my controller I've allowed 32 clock cycles to produce an output, so the pixel calculation is already staged. Currently I'm only using 12 of those cycles, but I've left myself plenty of headroom to add more calculation stages. The R, G and B values are calculated sequentially to cut the number of multipliers by a factor of three.)
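To make the structure concrete, here's a stripped-down sketch of the kind of thing I mean - the signal names and the toy colour function are purely illustrative, not my actual code:

```verilog
// Sketch only: a 32-cycle budget per pixel, with one multiplier
// time-shared across the R, G and B channels on successive cycles.
module effect_controller (
    input  wire       clk,
    input  wire [7:0] led_index,  // supplied by the WS2812 driver
    input  wire [7:0] brightness, // stand-in for one of the control channels
    output reg  [7:0] r, g, b     // handed back to the WS2812 driver
);
    reg [4:0] cycle = 0;          // 0..31 stage counter, wraps per pixel

    // Toy base colour derived from the index, just for the example.
    wire [7:0] base_r = led_index;
    wire [7:0] base_g = 8'd128;
    wire [7:0] base_b = 8'd255 - led_index;

    // The one shared multiplier.
    reg  [7:0]  mul_a, mul_b;
    wire [15:0] product = mul_a * mul_b;

    always @(posedge clk) begin
        cycle <= cycle + 1;
        case (cycle)
            5'd0: begin mul_a <= base_r; mul_b <= brightness; end
            5'd1: begin r <= product[15:8];               // scaled R ready
                        mul_a <= base_g; mul_b <= brightness; end
            5'd2: begin g <= product[15:8];               // scaled G ready
                        mul_a <= base_b; mul_b <= brightness; end
            5'd3: b <= product[15:8];                     // scaled B ready
            default: ;  // cycles 4..31 left as headroom for more stages
        endcase
    end
endmodule
```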
One of the recent additions that seems to have cost me a lot of logic cells is the 'glitter' effect. To achieve this I added a 256-bit register that effectively stores a glitter flag per pixel. Obviously I knew this would take at least 256 logic cells, but it seems to have taken more like 1000 and I don't know why (the read/write multiplexing around the register, perhaps?). Would I have been better off using BRAM for this?
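For reference, this is roughly what I imagine the BRAM version would look like: a 256 x 1 memory, written in a pattern that I believe yosys can infer into one of the 4 kbit block RAMs (they can be configured as 256 x 16, so one flag per address fits with room to spare). Names are illustrative:

```verilog
// Sketch of the BRAM alternative: a 256 x 1 memory holding one
// glitter flag per pixel, replacing the 256-bit flag register.
module glitter_flags (
    input  wire       clk,
    input  wire [7:0] rd_addr,  // pixel index currently being computed
    output reg        rd_flag,  // that pixel's glitter flag
    input  wire       wr_en,    // set or clear a flag
    input  wire [7:0] wr_addr,
    input  wire       wr_flag
);
    reg mem [0:255];

    always @(posedge clk) begin
        if (wr_en)
            mem[wr_addr] <= wr_flag;
        rd_flag <= mem[rd_addr];  // note: one cycle of read latency
    end
endmodule
```

The registered read costs a cycle of latency, but with 20 spare cycles in my per-pixel budget I don't think that would hurt.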