Computers have come a long way thanks to companies like Microsoft, Apple and IBM, but Elon Musk has advice for the developers of the world's latest and greatest machines: tread lightly.

Musk, the billionaire owner of SpaceX, wrote online that AI is "potentially more dangerous than nukes." In a series of recent tweets, Musk referenced two books - "Superintelligence" by Nick Bostrom and "Our Final Invention" by James Barrat - to support his statement.

"Superintelligence" is due out Sept. 3 and, according to its description on Amazon, asks, "What happens when machines surpass humans in general intelligence?"

Musk raised such concerns on CNBC's "Closing Bell" in June and has also taken financial steps to keep watch on the industry.

"There's some scary outcomes and we should try to make sure the outcomes are good, not bad," he said.

The Tesla CEO and PayPal co-founder also said he invested in Vicarious, an AI company, to "keep an eye on what's going on with artificial intelligence."

Barrat's book, "Our Final Invention," echoes these concerns, which have also been raised by the noted physicist Stephen Hawking.

"In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail-human-level intelligence," reads the Amazon description of Barrat's book. "Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine."

Roger McNamee, a venture capitalist, musician and co-founder of Elevation Partners, disagreed that AI poses an imminent threat, saying current events already offer plenty of cause for concern.

"If you want something to worry about, just read the newspaper," he told CNBC's "Squawk Alley." "There's a good chance we will have polluted the Earth beyond repair before they get any of this AI stuff to work."