Wednesday, February 17, 2010

King Tut Was Disabled, Malarial, and Inbred, DNA Shows



King Tut may be seen as the golden boy of ancient Egypt today, but during his reign, Tutankhamun wasn't exactly a strapping sun god.
Instead, a new DNA study says, King Tut was a frail pharaoh, beset by malaria and a bone disorder—and possibly compromised by his newly discovered incestuous origins. 
The report is the first DNA study ever conducted with ancient Egyptian royal mummies. It apparently solves several mysteries surrounding King Tut, including how he died and who his parents were.
"He was not a very strong pharaoh. He was not riding the chariots," said study team member Carsten Pusch, a geneticist at Germany's University of Tübingen. "Picture instead a frail, weak boy who had a bit of a club foot and who needed a cane to walk."
Regarding the revelation that King Tut's mother and father were brother and sister, Pusch said: "Inbreeding is not an advantage for biological or genetic fitness. Normally the health and immune system are reduced and malformations increase."
Tutankhamun was a pharaoh during ancient Egypt's New Kingdom era, about 3,300 years ago. He ascended to the throne at the age of 9 but ruled for only ten years before dying at 19, around 1324 B.C.
Despite his brief reign, King Tut is perhaps Egypt's best known pharaoh because of the wealth of treasures—including a solid gold death mask—found during the surprise discovery of his intact tomb in 1922. 
The new study, published this week in the Journal of the American Medical Association, marks the first time the Egyptian government has allowed genetic studies to be performed using royal mummies.
"This will open to us a new era," said project leader Zahi Hawass, the Secretary General of Egypt's Supreme Council of Antiquities (SCA) and a National Geographic Explorer-in-Residence. 
"I'm very happy this is an Egyptian project, and I'm very proud of the work that we did."
In the new study, researchers examined the mummies of King Tut and ten other royals long suspected of being his close relatives. Of these ten, the identities of only three had been known for certain.
Using DNA samples taken from the mummies' bones, the scientists were able to create a five-generation family tree for the boy pharaoh.
The team looked for shared genetic sequences in the Y chromosome—a bundle of DNA passed only from father to son—to identify King Tut's male ancestors. The researchers then determined parentage for the mummies by looking for signs that a mummy's genes are a blend of a specific couple's DNA.
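To make that parentage logic concrete, here is a minimal, hypothetical sketch in Python. The marker names and allele values are invented for illustration; the team compared genetic markers in the mummies' DNA, but these are not its data.

```python
# Hypothetical sketch of the parentage test described above: at each
# genetic marker a child inherits one allele from each parent, so a
# candidate couple is consistent only if every marker in the child's
# profile can be split between them. All values below are invented.

def consistent_parentage(child, mother, father):
    """True if the child's alleles at every marker could be a blend
    of the two candidate parents' alleles."""
    for marker, (a, b) in child.items():
        m, f = mother[marker], father[marker]
        # One allele must come from each parent, in either order.
        if not ((a in m and b in f) or (b in m and a in f)):
            return False
    return True

# Invented allele pairs for three mummies, for illustration only.
tut          = {"D13S317": (10, 12), "D7S820": (10, 15)}
kv55_father  = {"D13S317": (10, 12), "D7S820": (15, 15)}
younger_lady = {"D13S317": (10, 12), "D7S820": (6, 10)}

print(consistent_parentage(tut, younger_lady, kv55_father))  # True
```

A couple passing this test at every marker, with the expected Y-chromosome match on the father's side, would be consistent with parenthood; a single incompatible marker rules the pairing out.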
In this way, the team was able to determine that a mummy known until now as KV55 is the "heretic king" Akhenaten—and that he was King Tut's father. Akhenaten was best known for abolishing ancient Egypt's pantheon in favor of worshipping only one god.
Furthermore, the mummy known as KV35 was King Tut's grandfather, the pharaoh Amenhotep III, whose reign was marked by unprecedented prosperity.
Preliminary DNA evidence also indicates that two stillborn fetuses entombed with King Tut when he died were daughters whom he likely fathered with his chief queen Ankhesenamun, whose mummy may also have finally been identified.
Also, a mummy previously known as the Elder Lady is Queen Tiye, King Tut's grandmother and wife of Amenhotep III.
King Tut's mother is a mummy researchers had been calling the Younger Lady.
While the body of King Tut's mother has finally been revealed, her identity remains a mystery. DNA studies show that she was the daughter of Amenhotep III and Tiye and thus was the full sister of her husband, Akhenaten.
Some Egyptologists have speculated that King Tut's mother was Akhenaten's chief wife, Queen Nefertiti—made famous by an iconic bust. But the new findings seem to challenge this idea, because historical records do not indicate that Nefertiti and Akhenaten were related.
Instead, the sister with whom Akhenaten fathered King Tut may have been a minor wife or concubine, which would not have been unusual, said Willeke Wendrich, a UCLA Egyptologist who was not involved in the study.
"Egyptian pharaohs had multiple wives, and often multiple sons who would potentially compete for the throne after the death of their father," Wendrich said.
Inbreeding would also not have been considered unusual among Egyptian royalty of the time.
The team's examination of King Tut's body also revealed previously unknown deformations in the king's left foot, caused by the necrosis, or death, of bone tissue.
"Necrosis is always bad, because it means you have dying organic matter inside your body," study team member Pusch told National Geographic News.
The affliction would have been painful and forced King Tut to walk with a cane—many of which were found in his tomb—but it would not have been life threatening.
Malaria, however, would have been a serious danger.
The scientists found DNA from the mosquito-borne parasite that causes malaria in the young pharaoh's body—the oldest known genetic proof of the disease.
The team found more than one strain of malaria parasite, indicating that King Tut caught multiple malarial infections during his life. The strains belong to the parasite responsible for malaria tropica, the most virulent and deadly form of the disease.
The malaria would have weakened King Tut's immune system and interfered with the healing of his foot. These factors, combined with the fracture in his left thighbone, which scientists had discovered in 2005, may have ultimately been what killed the young king, the authors write.
Until now the best guesses as to how King Tut died have included a hunting accident, a blood infection, a blow to the head, and poisoning.
UCLA's Wendrich said the new finding "lays to rest the completely baseless theories about the murder of Tutankhamun." 
Another speculation apparently laid to rest by the new study is that Akhenaten had a genetic disorder that caused him to develop the feminine features seen in his statues, including wide hips, a potbelly, and the female-like breasts associated with the condition gynecomastia.
When the team analyzed Akhenaten's body using medical scanners, no evidence of such abnormalities was found. Hawass and his team concluded that the feminized features in the statues of Akhenaten created during his reign were adopted for religious and political reasons.
In ancient Egypt, Akhenaten was a god, Hawass explained. "The poems said of him, 'you are the man, and you are the woman,' so artists put the picture of a man and a woman in his body."
Egyptologist John Darnell of Yale University called the revelation that Akhenaten's appearance was not due to genetic disorders "the most important result" of the new study.
In his book Tutankhamun's Armies, Darnell proposes that Akhenaten's androgynous appearance in art was an attempt to associate himself with Aten, the original creator god in Egyptian theology, who was neither male nor female.
"Akenhaten is odd in his appearance because he belongs to the time of creation, not because he was physically different," said Darnell, who also did not participate in the DNA research.
"People will now need to consider Akenhaten as a thinker, and not just as an Egyptian Quasimodo."
The generally good condition of the DNA from the royal mummies of King Tut's family surprised many members of the team.
Indeed, its quality was better than DNA gathered from nonroyal Egyptian mummies several centuries younger, study co-author Pusch said.
The DNA of the Elder Lady, for example, "was the most beautiful DNA that I've ever seen from an ancient specimen," Pusch said.
The team suspects that the embalming method the ancient Egyptians used to preserve the royal mummies inadvertently protected DNA as well as flesh. 
"The ingredients used to embalm the royals was completely different in both quantity and quality compared to the normal population in ancient times," Pusch explained.
Preserving DNA "was not the aim of the Egyptian priests, of course, but the embalming method they used was lucky for us."

Verizon Swallows Hard And Embraces Skype


Verizon bowed to the inevitable today and officially embraced Skype on its smartphones, starting with BlackBerry and Android devices. Verizon customers will now be able to bypass the outlandish international calling rates on their mobile phones and make free Skype-to-Skype calls or use their much cheaper Skype Out minutes instead. Skype’s text IM will also work on the phones.
VoIP applications like Skype’s have gone from facing resistance from the carriers to a reluctant acceptance. Skype already offers one of the most popular apps on the iPhone, and at least it encourages more data usage, which subscribers do pay for. Skype accounted for 12 percent of all international calling minutes last year, and that number will just keep going up.
Apps like Skype, along with Web browsing and email, will get consumers hooked on bigger and bigger data plans. Verizon wants to sell that data pipe and fill it with the most attractive applications. It might lose out at first as people migrate away from overpriced international calls, but over time building out its recurring data subscription revenues will be a bigger business than international calling revenues, which it must share with carriers in other countries and which are sporadic for most subscribers anyway.
Update: Some more details from Andy Abramson at VoIP Watch. He reports that the Skype calls actually will not go over Verizon’s 3G network, but rather over its regular voice network until they hit a network operations center, where they will be transferred over to Skype’s Level 3 backbone. This makes more sense, since a high volume of Skype calls over Verizon’s wireless 3G data network could overwhelm it. By striking this deal, Verizon treats the calls as regular local voice calls before passing them off to Skype. So it is actually saving its data network for other uses. But you’ve got to wonder what kind of deal Skype struck and what, if any, share of Skype Out revenues Verizon will collect for calls originating from Verizon cell phones.

Why the Technology Sector Should Care About Google Books


Antitrust lawyer and Open Book Alliance leader Gary Reback has been called the “antitrust champion” and the “protector of the marketplace” by the National Law Journal, and has been at the forefront of many of the most important antitrust cases of the last three decades. He is one of the most vocal opponents of the Google Books settlement. I interviewed Reback a few months ago, and Google Books was one of the topics we discussed. In the column below, Reback discusses Google Books and its ties to Google search.
This Thursday leaders of the international publishing industry will watch with bated breath as a federal judge in New York hears arguments over whether to approve the Google Book Settlement.
More a complicated joint venture among Google and five big New York publishers than the resolution of pending litigation, the proposed settlement once promised unprecedented access to millions of out-of-print books through digital sales to consumers and online research subscriptions for libraries. But with the passage of time and the ability to examine the deal more closely, the promises proved illusory. The big publishers, as it turns out, have reserved the right to negotiate secret deals with Google for the books they claim through the settlement.
Meanwhile, torrents of outrage rained down on the New York court – from authors whose ownership rights will be appropriated through the settlement’s procedures, from librarians fearful of price exploitation by Google, from privacy advocates worried that Google will monitor the reading habits of library patrons, from libertarians incensed over the use of a legal procedure to effect the widespread appropriation of property, from digital booksellers concerned about Google’s unfair advantage in the marketplace.
Actually, those in the tech community should be watching the settlement proceedings more closely than anyone else. We have the most to lose if the deal is approved in its present form because, at bottom, the Google Book Settlement is not really about books. It’s really about search, the most important technology in the new economy.
According to the Department of Justice, Google dominates the market for search advertising and search syndication on the Web, with greater than a 70% share in both markets. These markets are difficult to enter because of powerful network effects and scale characteristics. Recent entry has been all but futile; indeed, the company with the second largest share, Yahoo, is leaving the market.
The search markets are special and different – even from other web markets. Google’s dominant share in these markets means that substantial numbers of web-based enterprises secure much of their business through “referrals” from Google’s search engine or advertisements placed by Google’s ad platform. The dominant market share makes Google the arbiter of each web business (books or medical supplies, as examples). In each case, Google decides which company succeeds and which company fails by its placement in search results and ad listings on the Google site.
The industry’s fear of Google has grown exponentially, right along with the company’s influence on web commerce. Not six months ago a prominent executive from a top web site – who withheld his name for fear of retribution – made an astounding proposal in a TechCrunch post. Noting from his own experience the potential for abuse inherent in Google’s power, the executive called for government regulation of the search markets to prevent manipulation of search results and ad listings.
The last six months have confirmed the anonymous executive’s worst fears. Once upon a time, Google claimed it employed neutral, mathematically-based algorithms to prioritize search results and ad listings. But last November Google admitted to the Washington Post that only search results from Google’s content competitors are listed according to neutral algorithms. Search results from Google’s own properties, like maps, news, and books, are now listed first, the algorithm notwithstanding. Even more recently Google admitted that it changes the rank ordering of paid search ads to prioritize its own company messages.
Whatever the advisability of government regulation, few would dispute that we need more and better competition in search to curb Google’s power. But Google is doing its best to keep that from ever happening. That’s where the Book Settlement comes in. Google intends to use the settlement to disadvantage its competitors and to bolster its own position in search.
Google announced its project to scan and digitize books in December 2004. Both commercial and not-for-profit entities started scanning books before Google did. Several other rivals started scanning books shortly after Google announced its project. All of these competitors scanned (pdf) only books in the public domain or for which they secured the rightsholder’s permission. Google, on the other hand, scanned all books in the collections of some of the nation’s leading research libraries, including those still under copyright, without securing permission from the rightsholders.
In the fall of 2005, five New York publishers along with the Authors Guild sued Google for copyright infringement. After three years of secret negotiations, and without taking a single deposition in the case, the parties announced a settlement on October 28, 2008. Through a legal ploy known as a “class certification” (which must be approved by the court), the plaintiffs who brought the suit now claim to speak for all holders of U.S. copyrights. Their proposed settlement gives Google (among other things) the right, in response to search queries, to display lengthy textual excerpts from just about every out-of-print book with a U.S. copyright (unless the rightsholder affirmatively objects) – tens of millions of books, in all.
Very recent results from scientific studies of web searching explain why Google has spent enormous amounts of money to acquire the digital rights to vast numbers of old, dusty books. Most search queries are directed to popular subjects – shopping, travel, medical information, etc. Some queries, though, are directed to more obscure subject matter. These are known as “rare,” “obscure,” “esoteric,” or, sometimes, “tail” queries, in reference to the “tailing off” portion of a graph showing the frequency distribution of a population (search queries, in this case) exhibiting the Pareto principle, known to everyone who sells products as the 80-20 rule. Most queries are directed to a few (relatively speaking) popular subjects and therefore show up in the “fat” part of the frequency curve. The frequency of increasingly obscure queries “tails off” asymptotically, providing a “long tail” to the right of the “fat” part of the curve.
For a time, computer scientists thought that most obscure queries were generated by only a few users (again, speaking relatively), and, hence, search engines could ignore obscure tail queries and still serve the great bulk of the user population. But research has shown that just about everyone makes a rare query from time to time. And, people decide which engine to use for their everyday search needs based on the engine’s ability to satisfy these rare queries, just as one would expect in a world that values “one-stop shopping.” Stated more formally, satisfying demand in the tail increases consumption in the “head” or fat part of the distribution curve.
Google will get an enormous advantage over its search competitors if it can support (i.e., respond satisfactorily to) tail queries that its competitors cannot. Scientific research shows that supporting tail queries produces a disproportionately large increase in overall user satisfaction – i.e., it disproportionately increases the size of the user population highly satisfied with the engine’s performance. In fact, according to the most recent study, satisfying an additional 1% of tail queries increases overall user satisfaction with the engine by more than 5% — this, in a market in which companies battle fiercely to wrest even a tenth of a point in market share away from Google’s control.
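The head-versus-tail arithmetic is easy to see in a toy model. The sketch below assumes, purely for illustration, that query frequencies follow a Zipf-style power law; that distributional choice is our assumption, not a claim from the studies Reback cites.

```python
# Toy model of the "long tail" of search queries: rank-frequency
# follows a Zipf-like power law (an illustrative assumption).

def zipf_weights(n_queries, s=1.0):
    """Unnormalized Zipf frequencies for query ranks 1..n_queries."""
    return [1.0 / rank ** s for rank in range(1, n_queries + 1)]

weights = zipf_weights(1_000_000)
total = sum(weights)

# Volume share of the 1,000 most popular queries (the "head")
# versus everything rarer (the "tail").
head_share = sum(weights[:1000]) / total
print(f"head (top 0.1% of queries): {head_share:.1%}")      # ~52%
print(f"tail (remaining 99.9%):     {1 - head_share:.1%}")  # ~48%
```

Under that assumption the thousand most popular queries carry only about half of total volume; the other half is scattered across rare queries, which is why an engine that answers the tail well earns disproportionate loyalty.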
Digital rights to virtually all out-of-print books will provide Google with a decisive advantage in responding to tail queries. Google created its book database by scanning the collections of the nation’s leading research libraries. These libraries consist largely of academic works on a wide variety of obscure subjects. The books contain information relevant to all kinds of rare queries. Much of the older information in the books might not be available from other sources, at least on the public web. Whatever the publication value of these books, they provide an enormous advantage in search. Indeed, presentations by Google within the last couple of months confirm that the company expects to use text from digital books to satisfy many of its users’ tail queries. If Google can stretch its advantage even further and deny its search rivals the ability to integrate the same corpus of books, Google’s lead in search will become insurmountable.
The proposed settlement does just that, leaving Google’s search competitors out in the cold. The settlement provides no means at all for competitors to get rights to so-called “orphan works” – in-copyright books whose rightsholders cannot be located. According to the parties’ court filings made just last week, ownership has been claimed for only about one million books out of the more than 12 million books scanned and the 170 million unique works identified by Google, leaving the company with exclusive digital rights to well over 90% of U.S. books. In addition, the settlement sets up procedures that make it easy for Google to clear rights to all other out-of-print works where rightsholders can be located, but leaves rivals without a mechanism to easily resolve disputes over ownership and copyright status that preclude competitive distribution. If approved in its current form, then, the settlement will solidify Google’s hold on the search market by giving the company exclusive rights to millions upon millions of books.
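As a quick sanity check on that "well over 90%" figure, using the counts quoted from the parties' filings:

```python
# Share of scanned books left effectively under Google's exclusive
# digital rights, per the ownership counts quoted above.
claimed = 1_000_000     # books whose ownership has been claimed
scanned = 12_000_000    # books scanned ("more than 12 million")

print(f"{1 - claimed / scanned:.1%}")   # -> 91.7%, "well over 90%"
```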
Under some circumstances, Google might be entitled to a competitive advantage that it secured through superior foresight. But, that’s not what happened here. The publisher plaintiffs demanded that Google’s competitors respect claims of copyright in their scanning, even as they secretly negotiated (pdf) with Google to give that company the settlement deal the plaintiffs never offered to Google’s competitors. The Department of Justice made the point most clearly in its brief. Google’s search dominance, DoJ said, may be further entrenched by its “exclusive access to content” through the settlement.
This outcome has not been achieved by a technological advance in search or by operation of normal market forces; rather, it is the direct product of scanning millions of books without the copyright holders’ consent and then using [class action procedures] to achieve results not otherwise obtainable in the market.
Permitting a company to solidify its dominance over all of web commerce through controversial legal stratagems rather than open market competition invites economic disaster. Likely, the judge will see Google’s ploy in that light, just as the Justice Department did. If not, government regulation might well be our only recourse.

Thursday, February 11, 2010

Facebook Mobile Hits 100 Million Users, Growing Faster Than On Desktops



For years, one of the most popular ways to access Facebook has been from mobile phones. The company has done quite a bit to make this possible, offering everything from SMS messaging functionality to web-based mobile sites and native applications for most smartphone platforms. Today, the company has announced that 100 million Facebook users are tapping into these mobile services, up from 65 million users last September.
Of course, Facebook has grown by over a hundred million members since the last milestone, so this increase isn’t a big surprise. But mobile growth seems to be accelerating even faster than Facebook is acquiring new members — Facebook had 65 million mobile users in September, and less than a week later announced that it had hit 300 million total active users (in other words, around 21.7% of users were using Facebook mobile).
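A quick back-of-the-envelope check of those figures; the current total member count is only bounded by the "over a hundred million" growth noted above, so the comparison is rough:

```python
# September milestone: mobile users as a share of total active users.
mobile_sep, total_sep = 65_000_000, 300_000_000
print(f"{mobile_sep / total_sep:.1%}")   # -> 21.7%

# Mobile since grew from 65M to 100M, about 54%; total membership grew
# by "over a hundred million" on a 300M base, i.e. 33%-and-change, so
# mobile usage is indeed outpacing overall member growth.
print(f"{100 / 65 - 1:.0%}")             # -> 54%
```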

Google Plans To Deliver 1Gb/sec Fiber-Optic Broadband Network To More Than 50,000 Homes



Google is getting into the broadband business. The company plans to deploy its own “experimental” fiber-optic network to at least 50,000 homes, perhaps as many as 500,000. The fiber-optic network will deliver speeds of 1 gigabit-per-second, which is more than 20 times faster than residential fiber optic services offered today in the U.S. The company writes on its blog:
We’re planning to build and test ultra high-speed broadband networks in a small number of trial locations across the United States. We’ll deliver Internet speeds more than 100 times faster than what most Americans have access to today with 1 gigabit per second, fiber-to-the-home connections. We plan to offer service at a competitive price to at least 50,000 and potentially up to 500,000 people.
The service will be competitive in price to today’s broadband services from cable and telephone companies, but it will be much faster. Verizon and Comcast must be thrilled. Google says it is doing this on a trial basis to promote new killer apps that will take advantage of the faster speeds, experiment with better ways to deploy fiber to the home, and create pressure for more open access to broadband in general. It sees its effort as complementary to the U.S. government’s national broadband deployment plans, which it also supports. Communities and municipalities who would like to be considered for Google’s service can apply here.
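To put those multiples in rough perspective, the sketch below back-solves the baseline speeds the "20x" and "100x" claims imply and compares download times. The baseline figures are our inference, not Google's numbers.

```python
# Implied baselines: 1 Gb/s at "more than 100x" typical broadband
# suggests a typical connection around ~10 Mb/s; "20x" residential
# fiber suggests top fiber tiers around ~50 Mb/s. Both are inferred.

FILE_MEGABITS = 700 * 8   # a 700 MB download expressed in megabits

for label, mbps in [("typical 2010 broadband", 10),
                    ("2010 residential fiber", 50),
                    ("Google trial network", 1000)]:
    seconds = FILE_MEGABITS / mbps
    print(f"{label:>24}: {seconds:7.1f} s")
# typical broadband: ~560 s; residential fiber: ~112 s; trial: ~5.6 s
```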
Google owns its own vast network of dark fiber around the globe to connect its data centers, speed up search, and lower its cost of streaming billions of videos a month on YouTube. With this project, Google is taking its first step in connecting that fiber backbone to consumers’ homes. It is not clear what Google services will come with a broadband subscription, but it is a safe bet that Google will be the default search and Gmail will be the default email. Maybe they can throw in Google Voice service and an Android phone that lets you talk over WiFi.

Sunday, February 7, 2010

Kazaa Takes A Swing At Symantec After Adware Accusations


The history of P2P file sharing service Kazaa (which actually started life as “KaZaA”) is known to most of us born in the eighties or before, and consists mainly of copyright related lawsuits and adware-ridden software.
The gist of the story can be found on its Wikipedia profile, but what many seem to forget in present times is that the service is still around, serving users an unlimited amount of (licensed) songs for a $20 monthly subscription fee.
Recently, a Symantec security program apparently identified the Kazaa desktop client as high-risk, flagging the software as adware. This prompted Brilliant Digital Entertainment, the company that operates Kazaa, to issue a special notice / consumer alert to its customers.
And it isn’t pulling any punches.
While boasting that Kazaa is now a legitimate business offering over one million fully licensed tracks to its customers, the company says Symantec has, for the second time in recent weeks, incorrectly identified Kazaa as high risk. As a result, the company says, a subset of users were unable to use Kazaa because Symantec’s security software flagged it as adware. Some of its users were apparently “sufficiently spooked by Symantec’s unilateral action” that they followed its advice to remove Kazaa.
In an angered statement, the company adds:
Symantec had justified turning off the music for some of Kazaa’s customers by flagging files in the Kazaa music plug-in application as high risk due to the files being used for serving advertisements. As a result, Kazaa customers or subscribers running Norton AV are having these files stripped from the application, which prevents them from using the service.
It continues:
Symantec’s error, hot on the heels of a similar mistake against Spotify, highlights the potential for anti-virus companies to do more harm than good in the effort to displace pirate operations from the on-line marketplace.
After the Spotify incident (Symantec classified the music streaming service as a Trojan about a week ago), the security software company apologized on Twitter. It’ll be interesting to see how they handle this notice from Kazaa.

"Super Earth" May Really Be New Planet Type: Super-Io


Oceans of lava might bubble on its surface. Hot pebbles may rain down from the sky. But the extrasolar planet CoRoT-7b is considered to be the most Earthlike world yet found outside our solar system.
A recent study, however, suggests that Earth might not be the best basis for comparison. Instead, the authors argue, CoRoT-7b is the first in a new class of exoplanets: a super-Io.
Like Jupiter's moon Io, CoRoT-7b could easily be in the right kind of orbit to experience what's known as tidal heating, according to study co-author Rory Barnes of the University of Washington in Seattle.
On Io, tidal heating is a result of the crust being constantly deformed by the push and pull of Jupiter's gravity. This action generates enough internal heat to drive hundreds of active volcanoes—and the same could be true for CoRoT-7b, Barnes said.
But unlike Io, CoRoT-7b closely orbits a star, not a planet, so tides aren't its only source of heat. Based on previous observations, astronomers know that CoRoT-7b's surface is between 1,832 and 2,732 degrees Fahrenheit (1,000 and 1,500 degrees Celsius).
That's hot enough for there to be "ponds or possibly even oceans of magma," Barnes said. Scientists also know that the planet is tidally locked, which means that only one side ever faces the star.
"There could be volcanism on the back side of the planet," Barnes said. "It could be that on one side the surface is molten, and on the other side there's raging volcanoes."
CoRoT-7b was found using the French-led planet-hunting mission CoRoT, which looks for periodic dips in starlight caused by orbiting bodies passing in front of—transiting—a star, as seen from Earth.
When CoRoT-7b's discovery was announced in February 2009, astronomers hailed the world as the smallest exoplanet yet found orbiting a sunlike star. 
From CoRoT-7b's transits, astronomers could tell that the planet is about twice the size of Earth, which is approximately 7,920 miles (12,760 kilometers) wide. Io measures roughly 2,260 miles (3,630 kilometers) across.
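For the curious, here is roughly how a transit dip translates into a planetary radius: the fractional drop in starlight is about (R_planet / R_star) squared. The depth and stellar radius below are illustrative assumptions, not the CoRoT team's published values.

```python
import math

R_SUN_KM = 696_000
R_EARTH_KM = 6_371

def planet_radius_km(transit_depth, star_radius_km):
    """Planet radius implied by a fractional dip in starlight."""
    return star_radius_km * math.sqrt(transit_depth)

star_radius = 0.87 * R_SUN_KM   # assumed: a star slightly smaller than the sun
depth = 3.4e-4                  # assumed: a ~0.034% dip in brightness

rp = planet_radius_km(depth, star_radius)
print(f"≈ {rp:,.0f} km, about {rp / R_EARTH_KM:.1f} Earth radii")
```

With those assumed inputs the implied radius comes out near 1.8 Earth radii, in line with the "about twice the size of Earth" estimate above.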
Later studies measured CoRoT-7b's mass and density and confirmed that the planet is rocky. Based on these characteristics, CoRoT-7b was dubbed a super-Earth.
The term is one of a handful—such as "hot Jupiter" and "super-Neptune"—being used to informally classify exoplanets based on how closely they resemble worlds in our solar system.
In the recent study, presented last month at a meeting of the American Astronomical Society, Barnes and colleagues looked at the possible orbits for CoRoT-7b based on its size and mass, its proximity to its star, and its interactions with a nearby sister planet, CoRoT-7c.
The researchers found that even a slight eccentricity in CoRoT-7b's orbit would generate enough tidal heating to spawn bunches of volcanoes, making the planet much more Io-like than Earthlike.
For starters, just as Io circles close to massive Jupiter, CoRoT-7b orbits very close to its host star, so the influence of gravity is especially strong, Barnes said.
What's more, both Io and CoRoT-7b are tidally locked. In Io's case, this means that one side always faces Jupiter. That side of the moon is being tugged so much harder by gravity that the otherwise round world becomes slightly elongated, with a bulge around the middle.
"Earth does this—we have a tidal bulge due to interactions with the sun and our moon," Barnes noted. "The ocean tides are the result of [gravitational] tides, but rock is also distorted due to tidal effects."
In addition, Io maintains an irregular, elliptical orbit due to interactions with other Jovian moons close by, so its distance to Jupiter changes over time. As Io gets closer to Jupiter, it becomes more elongated, and as it moves away it becomes more spherical.
"If you had a tennis ball and you kept squeezing it, you would get heat from friction," Barnes said. "For Io it's like that, except you're doing it to a planet."
For now, CoRoT-7b is too distant for current techniques and telescopes to accurately trace the planet's orbit, so whether the world truly resembles Io remains a mystery.
But "I think they have a pretty good case," said Rosaly Lopes, a planetary scientist with NASA's Jet Propulsion Laboratory in Pasadena, California.
After all, volcanism on Io had been predicted shortly before the Voyager spacecraft actually spotted the moon's volcanic plumes in 1979, she said.
"Stan Peale and colleagues ... analyzed the orbit of Io and said it would have tidal heating," Lopes said. The very next week, pictures from Voyager revealed about a dozen plumes, and later images from the Galileo spacecraft found more than 170 active volcanoes.
Peale, now a professor emeritus at the University of California, Santa Barbara, agreed that "the authors' conclusions are viable," noting that the presence of a second planet near CoRoT-7b means that the planet's orbit could vary "sufficiently to heat the interior to a super-Io state, leading to Io-like volcanic surface activity."
According to JPL's Lopes, "what's interesting here is that they're showing worlds like Io probably exist in other solar systems. But whether [CoRoT-7b] actually has active volcanism at the moment is going to be very difficult to prove."
It's possible spacecraft such as the Spitzer Space Telescope could see gases coming from CoRoT-7b's volcanoes, study author Barnes said.
"It might have a huge cloud of volcanic gases orbiting the planet. We could maybe see it in [light signatures] using Spitzer, but that would be very hard, because the planet is so far and so faint."
Overall, Barnes thinks similarly hot, rocky worlds will start turning up by the tens or hundreds as current planet-hunting missions such as CoRoT and the recently launched Kepler spacecraft peer deeper into the sky.
"I think of Kepler as a super-Io detector," Barnes said. And each super-Io found will be "a stepping stone to finding real super-Earths."