I'm a medical resident. I see a patient in clinic, do a history and physical, sit down at a computer, and write a note describing <i>what I just did.</i> That's medical documentation, and it's a burdensome, growing problem.<p>The solution would be some combination of video or audio that records the clinical encounter and automatically generates a note based on what was discussed and performed. It falls under Paul Graham's "Schlep Task" (<a href="http://www.paulgraham.com/schlep.html" rel="nofollow">http://www.paulgraham.com/schlep.html</a>); you'd have to work with individual clinicians, get their (and the patient's!) approval to record the encounter, record it from multiple angles, build tech into dumb devices (e.g. stethoscopes, Dopplers for pulses, O2 saturation monitors), and somehow use machine learning to integrate all that sensor data into a plain-text note.<p>It's probably the #1 problem from a provider's viewpoint right now, especially on the primary care side. Any individual provider will see 10-20 patients per day and write just as many notes.<p>Edit: I'd add that you'd <i>also</i> need access to existing notes, because progress notes typically summarize the patient's prior visits.
It runs on my laptop and phone and doesn't phone home. Example: I download a trained network, it runs locally (maybe on some special-purpose hardware), discovers remote resources (e.g. train, metro, and bus timetables), and tells me the shortest route to my destination without any company knowing where I'm going.<p>This is a silly example because we don't need an AI for that, but it's late here and it should at least give the idea. No centralized architecture, no spying, no tracking. I'll pay for that.
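A rough sketch of the on-device part, with a toy timetable standing in for the downloaded data (the stop names, travel times, and data source are all placeholders):

```python
import heapq

def shortest_route(graph, origin, destination):
    """Plain Dijkstra over a dict-of-dicts adjacency map; runs entirely locally."""
    dist = {origin: 0}
    prev = {}
    queue = [(0, origin)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == destination:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, minutes in graph[node].items():
            nd = d + minutes
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    # Reconstruct the path by walking predecessors backwards.
    path, node = [destination], destination
    while node != origin:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[destination]

# Toy timetable graph (travel times in minutes) standing in for downloaded data.
timetable = {
    "Home":    {"Metro A": 5, "Bus 12": 8},
    "Metro A": {"Central": 12},
    "Bus 12":  {"Central": 20},
    "Central": {"Office": 4},
    "Office":  {},
}
print(shortest_route(timetable, "Home", "Office"))  # (['Home', 'Metro A', 'Central', 'Office'], 21)
```

The point is that the only network traffic is the bulk timetable download; the origin and destination never leave the device.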
Currently, we are focusing on a very narrow field of Artificial Intelligence. Not that it is a bad thing, but we should not call it "AI". We are mostly solving the problem of Data Analysis at Scale, which, again, for businesses is the most logical thing to do. I do not doubt that in the future, data analysis will be an important jigsaw piece in a true AI system.<p>For now, I look forward to/want to see more research/movement in other ideas/fields/methods that peer toward AGI. Maybe Probabilistic (Quantum? or Neuromorphic?) Computing? Maybe Artificial Life? Maybe Cognitive Architectures? A return to the Symbolic?<p>PS: This is not a popular opinion, but I do want to share it with fellow HNers. We do most of our computing to solve the hard problems in our society and trade, and that is surely very noble. Having said that, when I was growing up, I used to see/feel that a lot of developments in computers happened for the sake of computing. It felt like hacking was an altogether different field: Linux, the whole Free Software movement, Windows NT, Doom, Quake, huge advances in compilers, the whole culture around it, etc.<p>Today I mostly see computing that works to advance and solve real-world business problems like better advertisements, or that helps with other noble fields like astrophysics and genetics. I consider it an advancement of our tech society, but, as immature as I am, I miss that time.<p>Anyway, I would love to see AI for the sake of AI.
AI that looks at all the medical research in the world, and helps people get the most accurate medical diagnosis and treatment.<p>Ah, and at the same time, why not create machines that automatically, creatively, and cheaply do biomedical R&D? Or AI that accelerates innovation around technologies that have a health impact?
I want to see much better human-in-the-loop systems in general. A simple example is systems that allow users to tweak the output of creative/generative systems, e.g. neural style transfer methods that let the user easily adjust the output with human-centric feedback. I want to say "good start, but make the background more like Caravaggio would have painted it, and lighten up the foreground to be more playful" and have the algorithm adjust the output accordingly.<p>We aren't impossibly far off from this, but not that close yet either.
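A minimal sketch of the "slider" part of this idea, assuming the classic Gatys-style transfer setup in PyTorch: the Gram-matrix targets of two style references are blended by a user-set weight. The layer indices, file names, and loss weights are illustrative choices, and ImageNet normalization is omitted for brevity:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # only the image is optimized

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1, a common choice
CONTENT_LAYER = 21                  # conv4_2, a common choice

def load(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x):
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylize(content_img, style_a, style_b, alpha, steps=300):
    """alpha in [0, 1]: 0 keeps style A, 1 pushes the result toward style B."""
    grams_a = [gram(f).detach() for f in features(style_a)[0]]
    grams_b = [gram(f).detach() for f in features(style_b)[0]]
    content_target = features(content_img)[1].detach()

    img = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        style_feats, content_feat = features(img)
        # The user-facing "slider": interpolate between the two style targets.
        style_loss = sum(
            F.mse_loss(gram(f), (1 - alpha) * ga + alpha * gb)
            for f, ga, gb in zip(style_feats, grams_a, grams_b)
        )
        loss = 1e6 * style_loss + F.mse_loss(content_feat, content_target)
        loss.backward()
        opt.step()
        img.data.clamp_(0, 1)
    return img.detach()

# result = stylize(load("photo.jpg"), load("current_style.jpg"), load("caravaggio.jpg"), alpha=0.7)
```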
As with everything in computing, what I want is more capability that runs on my own devices—my own compute servers, my workstation, my laptop, or my mobile device. Every application or service that has to interact with third-party servers ("the cloud") to provide its functionality is, to my mind, defective, or at least falls short of the ideal.<p>AI is no exception. I have swarms of CPU cores in my life, the totality of which could easily accomplish all of the "AI" functions of any value that I've observed in my life, and far more. Today's AI that works for me (rather than against me) provides things like voice recognition, calendar management, reminders, to-do list management, and route planning. None of these functions requires even a small fraction of my personal CPU core armada, which is idle for 99% of its clock cycles.<p>Today's all-too-popular refrain of leveraging the cloud to provide sufficient compute capacity for these tasks is disingenuous and too often accepted as truth when it's just cover for data exfiltration.
I can't wait for the people building and deploying AI / Machine Learning tools on a large scale to become aware of the broad societal implications of what they are helping to do. Or, if they already are, to impress that knowledge loudly and clearly onto their peers.<p>Not necessarily change or stop. Just be aware. This is going to be one of the enablers of the biggest "behind the scenes" changes in how we work as a society in our generation. As devs, we should take some responsibility for how we act in that regard. I don't see enough of that attitude at the moment.
Maybe not the next big advance, but I'll be truly impressed when an AI system with no game-specific prior knowledge is able to complete a game of Pokémon Yellow within a reasonable amount of game time.<p>It requires:<p>- Natural language understanding (choosing the right dialog options)<p>- Interpretation of visual cues (navigating terrain and buildings)<p>- Hierarchical planning (training certain Pokémon to relatively high levels rather than a scattershot approach)<p>- Puzzle solving (for gym access)<p>and quite a bit more.
I want robust, packaged setups (Docker, AWS, whatever) to be cheap/free, accessible, and work out of the box with no end-user setup.<p>I am getting into DL myself and am very excited about the potential, but I have spent literally 5 hours on setup (Python and a DL-specific AWS Ubuntu instance) and have gotten exactly nowhere. Version conflicts, IPython not working in a venv, dependency mismatches where many people built demos I want to use on Python 2.7 that don't work with my 3.5 setup, not being able to get matplotlib to display graphs over X11/remote SSH, etc., etc., etc. So frustrating.<p>I want programmers to be able to one-button set up a DL machine so they can start tinkering and not waste time on setup/dependencies/bugs.
Personally, I'd love to talk to a bot that has mastered and memorized many books and all of philosophy and science, and is also researching philosophy. Imagine getting his/her/their opinion on all the subtle questions life throws at you.
With the research directions and methodology that dominate AI now, I don't look forward to anything significantly new or better.<p>To me, the core technology of my startup looks better than anything I've seen in AI, but my technology is from some of my original research in applied math based on some advanced pure/applied math prerequisites. I'm not calling my work AI, but maybe some AI people would.<p>E.g., my work does well with the <i>meaning</i> of content. Just how my applied math does that has nothing to do with anything I've seen in current AI.<p>My view is that the current directions of AI are essentially out of gas -- there will continue to be new applications of the existing techniques, but new techniques are not promising.<p>IMHO, for new techniques for AI we need to do much better with (A) applied math and/or (B) implementations of relatively direct borrowing from natural intelligence, e.g., at the level of a kitty cat and working up from there.<p>E.g., for the math, stochastic optimal control can look brilliant, even prescient, and has had the basics on the shelves of the research libraries for decades.
My dream: AI applied to <i>scientific research</i>.<p>For years now, if not decades, we've been creating science - as measured in publications - <i>much</i> faster than any human being, or group of humans, can keep up with. Granted, lots of those papers are probably bogus, but then you don't know which is which until you actually sit down and read it...<p>I believe we're missing <i>tons</i> of insights and discoveries that we have all the necessary components for making, and it would only take a smart person to read the <i>right</i> three papers to connect the dots. Alas, chances that any human will do that are fast approaching zero. I think the only way to tackle this is to employ computation.<p>What I would love to see, therefore, is an AI system that would a) filter out bogus/bullshit papers and mark shoddy research that probably needs to be redone, and b) mine the remaining ones for connections, correlations, discovering which research supports which and which research contradicts each other.
ML that helps me learn faster as a human. Let's be honest, watching videos online is suboptimal. The middle-ages method of listening to a professor in a large room is suboptimal.<p>Can you personalize what I learn? Can you find the exact best explanation for my current level by looking at my facial expression?
AI-NET. Specific AIs that can communicate with other specialized AIs using a simple protocol to solve specific problems in their field with the help of others.<p>AIs don't need to know everything, but they must know where everything is and how to get that info. Sort of a DNS for intelligent repos, plus APIs to access them. Wolfram Alpha would be one, but we need more: medicine silos, agriculture silos, Wikipedia, Facebook, etc. should have their own silos with AI interfaces.<p>Then Google AI would help us search for silos of interest.
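A toy sketch of what the client side of such a "DNS for silos" might look like; the registry, endpoints, and message format are all made up for illustration:

```python
import json
from urllib import request

# Local registry mapping a knowledge domain to a specialist AI endpoint,
# roughly the way DNS maps a name to an address. All URLs are fictional.
REGISTRY = {
    "medicine":    "https://medical-silo.example/api/ask",
    "agriculture": "https://agri-silo.example/api/ask",
    "general":     "https://encyclopedia-silo.example/api/ask",
}

def resolve(domain):
    """Return the endpoint of the specialist AI for a domain, falling back to a general one."""
    return REGISTRY.get(domain, REGISTRY["general"])

def ask(domain, question):
    """Forward a question to the resolved silo and return its (assumed JSON) answer."""
    payload = json.dumps({"question": question}).encode()
    req = request.Request(resolve(domain), data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# ask("medicine", "What interactions does drug X have with drug Y?")
```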
An AI that can watch videos:<p>- automated security guard, especially in sensitive areas like public restrooms<p>- watching and editing hours of stock footage for a good summary, or the highlights<p>- automated refereeing in sports
I'm excited for self-driving cars and ML/AI tools that help professionals (e.g., drug discovery, treatment).<p>The Keras lead developer wrote a post called "The future of deep learning"[1]. The podcast Partially Derivative has a good summary of it[2].<p>On the implementation side, I'm looking forward to distribution. Computers are all around us, sitting idly by – how can we put them to use? How can we make them secure? Lock-free training schemes like Hogwild[3] are one step in that direction, and we're starting to see integration of ML models into mobile systems (with Apple's .mlmodel and ARKit).<p>[1]:<a href="https://blog.keras.io/the-future-of-deep-learning.html" rel="nofollow">https://blog.keras.io/the-future-of-deep-learning.html</a><p>[2]:<a href="http://partiallyderivative.com/podcast/2017/07/25/the_future_of_deep_learning" rel="nofollow">http://partiallyderivative.com/podcast/2017/07/25/the_future...</a><p>[3]:<a href="http://papers.nips.cc/paper/4390-hogwild-a-lock-free-approach-to-parallelizing-stochastic-gradient-descent" rel="nofollow">http://papers.nips.cc/paper/4390-hogwild-a-lock-free-approac...</a>
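For a flavor of what [3] looks like in practice, here is a minimal Hogwild-style sketch using PyTorch's shared-memory multiprocessing; the tiny linear model and random data are placeholders:

```python
import torch
import torch.multiprocessing as mp
import torch.nn as nn

def train(model, data, target, steps=200):
    # Each worker updates the *shared* parameters without any locking (Hogwild).
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()

if __name__ == "__main__":
    model = nn.Linear(10, 1)
    model.share_memory()  # put the weights in shared memory so all workers see them
    data, target = torch.randn(256, 10), torch.randn(256, 1)
    workers = [mp.Process(target=train, args=(model, data, target)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("final loss:", nn.MSELoss()(model(data), target).item())
```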
- Dumbing down the implementation. Most tools like TensorFlow, Keras, and PyTorch are heading that way, but there is still an expectation that the engineer understands the internals. The future of AI depends on the ability of every engineer to use these techniques without the learning curve that exists right now.<p>Please don't suggest using APIs -
they aren't cost effective, and the future lies in the ability to tweak things on your own.<p>- Also, availability of DL libraries across different languages and stacks. Yes, there are ways, but they take extra effort to get working. At the moment, to learn AI you first need to learn Python, R, or MATLAB.<p>The future of AI lies in its applications, and that can only happen through experimentation. If you come across a possible application, it shouldn't take you a year to really start experimenting.
Given the same algorithm implemented in any language, the AI should be able to compile (translate) every program to the same highly optimized machine code, or at least to code with the same performance, learning only from the sources of other programs and their resulting compiled code.
I'm looking forward to seeing software move from our computers into the third dimension of our daily lives, especially with robots in the workplace. Computer vision used to be the limiting factor, and now it's not. Especially for things like science (e.g. Opentrons), the concept of brute-force, massively parallel experimentation & data collection will be a huge game changer for discovery. There are areas in chemistry and materials science alone where modest improvements to existing processes would have such massive impacts on our society that we probably couldn't even estimate the resulting benefits to their fullest extent.
Question to VCs on HN: Is AI declining? Not that it is dying down, but my feeling is that general investment activity in AI has slowed down. Maybe I am wrong.<p>1. Do you still get as many AI deals in your inbox?<p>2. Are you closing as many deals this quarter as you were a year ago?
GAN improvements that lead towards finding global minima for any given distribution mapping. GANs are versatile but still suffer from mode collapse and instability, drastically reducing their utility.
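Not a route to global minima, but for the instability part one widely used mitigation is the WGAN-GP gradient penalty (Gulrajani et al., 2017). A minimal PyTorch sketch, assuming a critic `D` and same-shaped `real`/`fake` batches:

```python
import torch

def gradient_penalty(D, real, fake):
    """Penalize deviation of the critic's gradient norm from 1 on interpolated samples."""
    batch = real.size(0)
    # One interpolation coefficient per sample, broadcast across the remaining dims.
    eps = torch.rand(batch, *([1] * (real.dim() - 1)), device=real.device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = D(mixed)
    grads = torch.autograd.grad(
        outputs=scores, inputs=mixed,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.reshape(batch, -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()

# Inside the critic's update (10.0 is the penalty weight from the paper):
# d_loss = D(fake).mean() - D(real).mean() + 10.0 * gradient_penalty(D, real, fake)
```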
To be honest - games. An AI that could write a story by itself, and characters that interact based on AI rather than some state machine, would be an amazing experience.
Big advance? AI that can learn conditional logic.<p>DL tops out at mapping vector spaces to less complicated vector spaces. Incredibly powerful, but incredibly limited in which problems it can emulate.
Distributed and decentralized AI a la things like Synapse <a href="https://synapse.ai/" rel="nofollow">https://synapse.ai/</a>
Not sure if it will happen, but I'd love to see life after backpropagation [1]. In particular, I'd love to see advances in unsupervised learning, since the amount of labeled data is a huge limit on what AI can currently do.<p>[1]: <a href="https://www.axios.com/ai-pioneer-advocates-starting-over-2485537027.html" rel="nofollow">https://www.axios.com/ai-pioneer-advocates-starting-over-248...</a>