His argument seems to be that advances in AI will mean that software development itself is soon automated away, in which case a CS degree will be useless. Then, he advocates studying philosophy because this future will need "big picture" thinkers.

I don't know how much substance there is to discuss here. It pretty much hinges on how you think AI will develop over the coming decades. I am skeptical that software development will be automated away anytime soon, especially since the actual writing of code is just one part of the job. I recall an Economist article from several years back arguing that jobs are less automatable the more varied they are. By this logic, custodians should be among the last to go: a given custodian does so many different maintenance tasks that it would be impractical for a bot to replace them.

Software development seems similar. You're constantly pivoting between different and shifting problems. Most of the impressive AI demonstrations, like image recognition or playing Go, are really about getting good at a single well-defined task. We seem to be *very* far away from the kind of adaptability required to automate away developers (though perhaps the CS degree will devalue as more people pursue it).

It's also unclear to me how exactly philosophy is better preparation. My own experience with academic philosophy is limited, but from what I can tell its main selling point is that it teaches you to examine things in a critical way that's very aware of the argument in question, the assumptions made, and the structure the argument sits in. My experience in math (which I pursued much more) was similar, and I imagine that most good CS programs teach similar skills?

"Study philosophy" here seems like a catchy (and perhaps poor) proxy for "make sure you can reason about stuff outside a narrow programming bubble". But that's always been good advice.
Firstly, AI is currently massively overhyped and people are overestimating the progress we are making. What we are currently doing is curve fitting and searching on steroids, enabled by the ever-increasing availability of computing power (see the toy sketch at the end of this comment for what I mean by curve fitting). What we know about general intelligence hasn't changed much in a long time, at least as far as I know.

I won't rule out that we may build an AGI by just throwing enough computing power at it and simply simulating brains with enough fidelity, without actually understanding how it works, at least not at first. But I really don't see much - if any - progress towards building an AGI because we actually understand what we have to do. Maybe the secret of general intelligence is just curve fitting and searching at massive scale, and we fail to find the secret behind it because there is none, but we really don't know.

That of course doesn't mean that the recent developments are not interesting or don't have useful applications; they are just not as groundbreaking as often portrayed, and we probably still have a rather long way to go.

With regards to big-picture thinking and general problem-solving ability, I don't think you are in a particularly bad spot as a software developer or hacker. Especially not if you are doing project work or frequently change jobs and regularly have to get familiar with new business domains, or if your focus is more at the design and architecture end of the spectrum as compared to coming up with beautiful-looking CSS.

But I would certainly agree that philosophers have some ways of thinking that are at least to some degree unlike what you find in software developers, though I can't really pin it down. I only know it exists because I am frequently somewhat surprised by the thoughts and ideas that come up in philosophy; they don't seem like things I would think of. Maybe it is just the level to which philosophers dissect things, which goes beyond what is required in software development, but I really can't tell for sure.
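To make the "curve fitting" point concrete, here is a minimal toy sketch (my own illustration, not anything from the article; the cubic and the noise level are arbitrary assumptions): fit a polynomial to noisy samples with NumPy. Much of modern ML is this idea scaled up to far more parameters and fancier optimizers, which is a different thing from a theory of general intelligence.

```python
# Toy illustration of "curve fitting": recover an underlying function
# from noisy samples. Modern deep learning is this idea scaled up to
# millions or billions of parameters.
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an unknown function (here: a cubic).
x = np.linspace(-3, 3, 200)
y = 0.5 * x**3 - x + rng.normal(scale=1.0, size=x.shape)

# "Learning" = choosing coefficients that minimize squared error.
coeffs = np.polyfit(x, y, deg=3)
model = np.poly1d(coeffs)

# The fitted model interpolates well near the data...
print("fit at x=1.0:", model(1.0))
# ...but says nothing about anything outside the regime it was fit on,
# which is one intuition for why curve fitting alone looks unlike
# general intelligence.
print("fit at x=100:", model(100.0), "vs true:", 0.5 * 100**3 - 100)
```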
For some reason, despite his being extremely accomplished, I just can't take any advice he gives seriously, compared to Bill Gates or even Ray Dalio.
I studied philosophy in college but got my CS degree. I think some guy called Paul Graham did so too. I think philosophy is useful for detecting if someone is lying to you or doesn't know what they are talking about. It does stress critical thinking in different ways than, say, studying math (which I also studied).

I can't really take what he is saying at any serious level, though, and I don't believe what he is saying at all. I think this story is just to grab headlines.
> Cuban advises ditching degrees that teach specific skills or professions and opting for degrees that teach you to think in a big picture way, like philosophy.

I don't think my computer science degree taught me 'specific skills'; instead it taught me how to learn: how to pick up new concepts in a short period of time. I think he does not fully understand what a computer science degree is or how artificial intelligence works.
Personally I think that computer programming AIs that can really replace humans (beyond just programming automation) will need to be AGIs. Those AGIs will be able to do philosophy just as well as they can program.

I also personally believe that we currently have most if not all of the ingredients necessary for AGI. My prediction is that in 2018 or 2019 we will see public demonstrations of AGI. These initial demos will be underwhelming and not have the capacity of humans or even animals -- but nonetheless will be truly general intelligences with significant human- and animal-like abilities.
It will take another few years before the technology is widely recognized to be human-equivalent or better, but the initial systems will be trained in numerous fields quickly. There will be very powerful general systems available by 2021.

Some of the pieces necessary for general intelligence, as I see them, are: truly general-purpose inputs and outputs (like a body or virtual body); highly efficient processing of high-bandwidth input and output; fast online (immediate) learning; hierarchical learning and computation; and the ability to learn and recombine flexible sequences and time-based data. Advanced neural net-based systems, especially ones inspired by animal cognition, are providing all of these features. It's mainly a matter of integrating existing leading-edge neural-net research with a good understanding of AGI.
Utter nonsense, for two reasons.

First, he is reversing cause and effect, in that AI is a philosophical problem (more specifically an epistemological one). The philosophy departments have been doubting the efficacy of reason for over a century now, a process that was started by Descartes, advanced by Hume, and cashed in by Kant to save faith from reason (i.e. subjectivism). If modern philosophy had actually been working to validate reason and explain how it works, we would already have AI. AI is on hold until someone explains how reason works, and more specifically concepts, in enough detail that we can program a machine to do it.[1]

Second, if you signed up for philosophy courses or a PhD today, perhaps you would learn how to be a more critical thinker, but the danger is that you would be infected with navel-gazing skepticism and the impotence of reason, and thus wreck your ability to think, big or small picture. If you are really talented you can get tenure writing essays destroying the work of your colleagues without generating a shred of original work.[2]

[1] Someone from the early days of computing said (I paraphrase): "Show me what the mind is doing and I can make a machine that can think." I want to say von Neumann, but I've never found a source for that statement. If anyone knows who said it and where, I would be very grateful for a reference.

[2] LBJ: "Any jackass can kick down a barn but it takes a good carpenter to build one."
Many people think the current buzz in AI is about intelligent machines rather than highly repeatable pattern recognition. I suspect we are several major advancements away from sentient machines, and in a few years the buzz over ML will die down as we realize that it isn't much more than a great tool for sorting inputs.
I personally have taken it upon myself to learn more about psychology. I've already spent most of my life learning about machines and how to control them. It's time to learn more about people.
"We don't need people to engineer buildings any more, because we have architects and city planners."<p>I wouldn't expect a philosophy major to leapfrog into a similar compensation package as a competent engineer. At best they're going to be like PMs with domain knowledge, and working experience building software will be required for folks on product teams. Just another case of individual with cross-discipline expertise being more valuable than ones with narrower focus, which is nothing new in the workforce.
Reminds me of what an ex-colleague used to say: every scientist is a philosopher, but not every philosopher is a scientist.

Let me explain: philosophers love questions and problems, but the difference is that scientists like to find answers to those questions and solutions to those problems. In practice, it seems that philosophers like the questions more than the answers, and prefer them to remain "unsolved".
Why do people keep quoting Mark Cuban? He got lucky by selling his company to Yahoo just before the dotcom crash. I don't know of any other worthy thing he's done since then. I really wish reporters weren't so lazy and didn't just keep parroting things from people they've heard of, without qualifying how useful or effective the source is.
While I agree that someday (more likely 40+ years from now) AI might be general enough to code new/better AI, I don't think philosophy will be the go-to college degree.

I think the arts possibly will, because if AI is doing all the work, building the industries and technologies we want, then I feel science and entertainment will be the two things AI can't do. Science, because it takes human curiosity even to know what we want to know or search for, and the whole point of knowing is curiosity. I think science will exist for a while, albeit augmented fiercely by AI.

Entertainment, because if there are no jobs as accountants, lawyers, or doctors, or any other jobs, and we end up in post-scarcity, one thing we will need is something to do with all our free time. That something will be binging on Netflix and social media 24/7.
> Cuban advises ditching degrees that teach specific skills or professions and opting for degrees that teach you to think in a big picture way, like philosophy.

I'm not sure that big-picture thinking skills come from studying one specific field like philosophy by itself. I'd expect interdisciplinary thinking and life experiences to be more important.

Examples: meet many different kinds of people from around the world who challenge your preconceived ideas, and listen with an open mind. Work many different kinds of jobs (or volunteer). Approach learning from an interdisciplinary perspective and study multiple fields.

Studying philosophy isn't a bad idea, but I don't think that one field by itself is the solution.
At the least, we'll need some kind of overseer for what these "AI"s are doing. These people will need strong analytical abilities and the ability to deal with various levels of abstraction, something that computer science people surely have.

Yeah, there's not much good content here, but it is probably useful to know what people like him are thinking (the new American oligarchs?). If, in fact, he is saying what he actually thinks. The only thing we can be sure of is that he and his people are scared of AI.
If computer science is worthless, the rest of the jobs will be as well. CS is about solving problems in (almost) its purest form. If AI can do that, humans are done.
Stop listening to people just because they are rich. I don't even mind this guy that much, but what evidence is there that he knows what he is talking about?
Why philosophy in particular, though?

I am sympathetic to worries about how automation will affect the job market, and I love philosophy -- but I saw no explanation in this article for why Mark Cuban believes *philosophy* in particular will be more valuable.
He *may* not be wrong. Check out "Getting to Philosophy" on Wikipedia. [0] (A rough script to try the first-link walk yourself is sketched below.)

"There have been some theories on this phenomenon, with the most prevalent being the tendency for Wikipedia pages to move up a "classification chain." According to this theory, the Wikipedia Manual of Style guidelines on how to write the lead section of an article recommend that the article should start by defining the topic of the article, so that the first link of each page will naturally take the reader into a broader subject, eventually ending in wide-reaching pages such as Mathematics, Science, Language, and of course, Philosophy, nicknamed the "mother of all sciences"."

[0] https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy
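Here is a rough, hypothetical Python sketch of that walk (my own illustration: the real "Getting to Philosophy" rule also skips links inside parentheses and italics, which this simplified version does not, so its path can differ from the one the quoted paragraph describes; the starting article is arbitrary).

```python
# Rough sketch of the "Getting to Philosophy" walk: repeatedly follow
# the first in-body article link of each Wikipedia page.
# Simplification: the real rule skips links inside parentheses and
# italics; this version does not.
import requests
from bs4 import BeautifulSoup

def first_link(title):
    html = requests.get(
        "https://en.wikipedia.org/wiki/" + title,
        headers={"User-Agent": "getting-to-philosophy-demo"},
        timeout=10,
    ).text
    content = BeautifulSoup(html, "html.parser").find(id="mw-content-text")
    if content is None:
        return None
    # Scan paragraphs in reading order for the first link to another
    # article (hrefs containing ":" point at File:/Help:/etc., so skip).
    for p in content.find_all("p"):
        for a in p.find_all("a", href=True):
            href = a["href"]
            if href.startswith("/wiki/") and ":" not in href:
                return href[len("/wiki/"):]
    return None

def walk(title, max_hops=50):
    # Stop on a loop, a dead end, or arrival at Philosophy.
    seen = set()
    while title and title not in seen and len(seen) < max_hops:
        print(title)
        if title == "Philosophy":
            return
        seen.add(title)
        title = first_link(title)

walk("Computer_science")
```

Run from most starting articles, the printed chain tends to climb the "classification chain" described above toward broader and broader topics.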
I think it's far more likely that, if this process takes place, CS degrees will morph more towards areas like philosophy and mathematics than towards software engineering.

I doubt they're going to be teaching the exact same way in 30 years.
"In 10 years, a liberal arts degree in philosophy will be worth more than a traditional programming degree."

Dang, I couldn't think of a less qualified guy to make that call!

And I love philosophy!
Hard to take anything this guy says seriously. Ever since he announced that he would be happy to run for vice president, without caring whether it was under Clinton or Trump, I lost interest in whatever he said. He said this after regularly blasting Trump as not qualified to be president. He blasted Bitcoin, then decided it was a must-have. Previously he blasted the stock market as a gambling den. And I'm sure there are more examples. He loves publicity, and the press loves to print whatever BS he's saying today. I'm sure tomorrow it will be something else. Mark Cuban, shut the F--- up!!
I flagged this for being unsubstantiated scarebait.

> That's because Cuban expects artificial intelligence technology to vastly change the job market, and he anticipates that eventually technology will become so smart it can program itself.

> "What is happening now with artificial intelligence is we'll start automating automation," Cuban tells AOL. "Artificial intelligence won't need you or I to do it, it will be able to figure out itself how to automate [tasks] over the next 10 to 15 years."

This is coming from a guy who doesn't know a lick of programming, let alone anything about AI. In fact, he mainly dabbles in sports and Shark Tank. It's yet another "you luddites are all going to be replaced, so you better piss your pants" article, and all it's based on is this guy's gut feeling. Silicon Valley needs to chill the fuck out with overhyping AI, because it's going to backfire on them substantially when people start realizing they can't back up their promises.