03-23-2017, 04:56 AM
Obviously Dadda trees are not the most efficient; they're for the challenge, like bothering with Kogge-Stone or other carry-lookahead designs when instacarry and CLE exist.
As for neural networks, conceptually they're quite simple: each neuron takes a weighted sum of its inputs, applies an activation function, and passes the result on to the next layer. The real challenge is the learning, either with backpropagation or the way I did it in a small Python project, using a genetic algorithm to evolve the weights. I'm only a beginner in the field, but it's pretty relevant to my desired major, so in a year or two I'll be much more knowledgeable.
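Roughly, in Python it looks something like this (a toy sketch, not my actual project code; the function names and the sigmoid activation are just illustrative):

```python
import math
import random

def sigmoid(x):
    # A common activation function; squashes the weighted sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through the activation.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(network, inputs):
    # A network is a list of layers; a layer is a list of (weights, bias)
    # pairs. Each layer's outputs become the next layer's inputs.
    for layer in network:
        inputs = [neuron(inputs, weights, bias) for weights, bias in layer]
    return inputs

def mutate(network, rate=0.1, scale=0.5):
    # Genetic-algorithm "learning" in miniature: randomly nudge some weights
    # and biases. A full evolution loop would score many mutated copies
    # against a fitness function and keep the best ones each generation.
    def jiggle(v):
        return v + random.gauss(0.0, scale) if random.random() < rate else v
    return [[([jiggle(w) for w in weights], jiggle(bias))
             for weights, bias in layer]
            for layer in network]
```

A tiny 2-2-1 network is then just `[[(w, b), (w, b)], [(w, b)]]`-shaped, and one generation of "training" is: mutate a population of these, score each on the task, keep the best.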
The sorting network idea I did come across briefly, but I didn't realize the connection to mergesort, so that's pretty interesting. It's also cool that they're called "comparator networks", which is suggestive.
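For anyone else who missed it, the connection is Batcher's odd-even mergesort: recursively build sorted halves out of comparators, then merge them with a fixed comparator pattern. A quick Python sketch (assumes the input size is a power of two; the names are mine):

```python
def oddeven_merge(lo, n, r):
    # Merge the two sorted halves of the run of length n starting at lo,
    # comparing elements r apart; yields (i, j) comparator pairs.
    step = r * 2
    if step < n:
        yield from oddeven_merge(lo, n, step)
        yield from oddeven_merge(lo + r, n, step)
        for i in range(lo + r, lo + n - r, step):
            yield (i, i + r)
    else:
        yield (lo, lo + r)

def oddeven_merge_sort(lo, n):
    # Same shape as mergesort: sort both halves, then merge them.
    if n > 1:
        m = n // 2
        yield from oddeven_merge_sort(lo, m)
        yield from oddeven_merge_sort(lo + m, m)
        yield from oddeven_merge(lo, n, 1)

def apply_network(values, comparators):
    # Run the wires through the network; each comparator puts a pair in order.
    v = list(values)
    for i, j in comparators:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v
```

For example, `apply_network([3, 1, 4, 2], list(oddeven_merge_sort(0, 4)))` gives `[1, 2, 3, 4]` via the well-known 5-comparator network for 4 inputs. Since the comparator pattern is fixed ahead of time, it maps directly onto hardware (or redstone) min/max units.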
It's good to see people still working on new things. I think there's a lot of room for optimization when it comes to Dadda / Wallace trees.