Oh, hey, it's a generic package: you lose type safety, and it saves you from writing roughly 10 lines of code.<p>Buffered channels + goroutines + a WaitGroup already let you implement this trivially, and because channels are generically typed built-ins, you can do it without the nasty type casts. Really, []interface{} is a terrible type to work with.
Just yesterday I was staring at some of my Go code thinking that channels are the "goto" of concurrency. You can build just about anything with them, but to understand the code you have to read all of it, hold it in your head, and reason about every possible interleaving. In the '60s that's how flow of control was done. As the '70s went by, "structured programming" came in, and exotic things like <i>while</i> loops, <i>switch</i> statements, and functions that you could only enter at the top (so limiting!) became the norm.<p>This post proposes a level of abstraction: take a common 10-line idiom and abstract it to a word. I'd much rather read code with the abstraction. (In this case the raw version is clean to read, but there are many complicated patterns in common use involving auxiliary chans for cancellation and timeouts.) Sadly, this is where it collides with the Go language designers. Go is anti-abstraction by design. If you don't like that, then you descend into <i>interface{}</i> hell and manual runtime type checking, or change languages, or just repeat yourself a lot and pray you get the fiddly bits right each time.
I wrote a similar Go package for running work loads in parallel, but I used beanstalkd for job/result transport. This allows me a bit more freedom to spread the workers/requesters across my network. It's a bit rough around the edges and could use some refactoring, but it works well for my uses.<p><a href="https://github.com/eramus/worker" rel="nofollow">https://github.com/eramus/worker</a>
Seems to be way too trivial for a library: <a href="http://play.golang.org/p/U0URukpeO3" rel="nofollow">http://play.golang.org/p/U0URukpeO3</a><p>Although I like the idea of having some kind of API to spread the work across multiple machines, since channels are slow anyway.
>How does it make the program run faster?<p>Sunfmin, wrong answer: you'd get a 20x improvement from 20 workers only if you have 20 cores (ignoring the small overhead). Regardless of the number of workers, your queue of jobs will be consumed in roughly number_of_jobs * single_job_duration / GOMAXPROCS.
Write the 20 lines yourself before pulling in an extra 3rd-party dep just for that.<p>The cost of carrying another 3rd-party dependency is greater than the seconds you save by using this.
I'm not sure what the usefulness of this is (apart from saving you from writing one or two functions yourself), isn't it just a parallel queue?