NoProp is an approach for training neural networks without relying on traditional back-propagation or forward-propagation through the full network. If it holds up, it could change how networks are trained, opening the door to new architectures and optimizations; early discussion suggests it may offer computational advantages in settings where end-to-end gradient descent struggles. The analogy with unrolling a diffusion model points at the core idea: each layer (or block) is trained independently to denoise a noisy version of the target, so no gradient has to flow through the whole stack, much as alternatives to Backpropagation Through Time (BPTT) avoid unrolling a recurrent network across time; a rough sketch of this per-layer setup is given below. That locality could ease scalability and efficiency concerns in deep learning, but the community appears cautiously optimistic rather than convinced and is still exploring the ramifications as researchers dig into the details.
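
The sketch below is a minimal illustration of the per-layer denoising idea, not the authors' reference implementation: it assumes a discrete-time setup in which every block receives the raw input plus a noised copy of the target embedding and is trained with its own local loss, so no gradients cross block boundaries. Names such as `DenoisingBlock`, `alphas`, and `num_blocks`, and the specific noise schedule, are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

num_blocks, in_dim, target_dim, hidden = 4, 784, 10, 256

class DenoisingBlock(nn.Module):
    """One independently trained block: (input, noisy target) -> denoised target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + target_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, target_dim),
        )

    def forward(self, x, z_noisy):
        return self.net(torch.cat([x, z_noisy], dim=-1))

blocks = [DenoisingBlock() for _ in range(num_blocks)]
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]
# Noise schedule: later blocks see less-corrupted targets (illustrative values only).
alphas = torch.linspace(0.2, 0.9, num_blocks)

def train_step(x, y_embed):
    for t, (block, opt) in enumerate(zip(blocks, opts)):
        # Corrupt the target embedding, as in a diffusion forward process.
        noise = torch.randn_like(y_embed)
        z_noisy = alphas[t].sqrt() * y_embed + (1 - alphas[t]).sqrt() * noise
        # Local denoising loss: no gradient flows between blocks,
        # so there is no end-to-end backward pass through the stack.
        loss = ((block(x, z_noisy) - y_embed) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Toy usage with random data standing in for a real dataset.
x = torch.randn(32, in_dim)
y = torch.nn.functional.one_hot(torch.randint(0, target_dim, (32,)), target_dim).float()
train_step(x, y)
```

At inference time, one would presumably start from a noisy target embedding and apply the blocks in sequence, each refining the estimate like a reverse diffusion step, which is where the "unrolled diffusion model" analogy comes from.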