Category: IT news

  • The ‘creepy Facebook AI’ story that captivated the media

    Mark Zuckerberg. Image copyright: Getty Images

    Image caption: Facebook has been experimenting with AIs that negotiate with each other

    The newspapers have a scoop today – it seems that artificial intelligence (AI) could be out to get us.

    “‘Robot intelligence is dangerous’: Expert’s warning after Facebook AI ‘develop their own language’”, says the Mirror.

    Similar stories have appeared in the Sun, the Independent, the Telegraph and in other online publications.

    It sounds like something from a science fiction film – the Sun even included a few pictures of scary-looking androids.

    So, is it time to panic and start preparing for an apocalypse at the hands of machines?

    Probably not. While some great minds – including Stephen Hawking – are concerned that one day AI could threaten humanity, the Facebook story is nothing to be worried about.

    Where did the story come from?

    Way back in June, Facebook published a blog post about interesting research on chatbot programs – which have short, text-based conversations with humans or other bots. The story was covered by a number of news outlets at the time.

    Facebook had been experimenting with bots that negotiated with each other over the ownership of virtual items.

    It was an effort to understand the role language plays in how such negotiations unfold, and crucially the bots were programmed to experiment with language to see how that affected their dominance in the discussion.

    A few days later, some coverage picked up on the fact that in a few cases the exchanges had become – at first glance – nonsensical:

    • Bob: “I can can I I everything else”
    • Alice: “Balls have zero to me to me to me to me to me to me to me to me to”

    Although some reports insinuated that the bots had at this point invented a new language in order to elude their human masters, a better explanation is that the neural networks had simply modified human language for the purposes of more efficient interaction.

    As technology news site Gizmodo said: “In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand – but while it might look creepy, that’s all it was.”
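
    One widely repeated – and speculative – reading of exchanges like these is that the bots drifted into using repetition to signal quantity, since nothing in their training rewarded staying close to natural English, only negotiating effectively. The sketch below is purely illustrative of that idea; it is not a model of Facebook’s actual system:

```python
def encode_claim(item, quantity):
    """Claim `quantity` units of `item` by repeating a filler token."""
    return item + " " + " ".join(["to me"] * quantity)

def decode_claim(message):
    """Recover (item, quantity) from the repeated-token shorthand."""
    item, _, rest = message.partition(" ")
    return item, rest.count("to me")

msg = encode_claim("balls", 3)
```

    Under this hypothetical encoding, `encode_claim("balls", 3)` yields “balls to me to me to me” – superficially nonsense, but an unambiguous signal between two agents that share the convention.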

    Image copyright: AFP

    Image caption: Unlike in the movies, humans and machines aren’t trying to kill each other.

    AIs that rework English as we know it in order to better compute a task are not new.

    Google reported that its translation software had done this during development. “The network must be encoding something about the semantics of the sentence,” Google said in a blog post.

    And earlier this year, Wired reported on a researcher at OpenAI who is working on a system in which AIs invent their own language, improving their ability to process information quickly and therefore tackle difficult problems more effectively.

    The story seems to have had a second wind in recent days, perhaps because of a verbal scrap over the potential dangers of AI between Facebook chief executive Mark Zuckerberg and technology entrepreneur Elon Musk.

    Robo-fear

    But the way the story has been reported says more about cultural fears and representations of machines than it does about the facts of this particular case.

    Plus, let’s face it, robots just make for great villains on the big screen.

    In the real world, though, AI is a huge area of research at the moment and the systems currently being designed and tested are increasingly complicated.

    One result of this is that it’s often unclear how neural networks come to produce the output that they do – especially when two are set up to interact with each other without much human intervention, as in the Facebook experiment.

    That’s why some argue that putting AI in systems such as autonomous weapons is dangerous.

    It’s also why ethics for AI is a rapidly developing field – the technology will surely be touching our lives ever more directly in the future.

    Image copyright: Getty Images

    Image caption: Most chatbots are designed to carry out a pretty limited set of functions – and are therefore fairly boring

    But Facebook’s system was being used for research, not public-facing applications, and it was shut down because it was doing something the team wasn’t interested in studying – not because they thought they had stumbled on an existential threat to mankind.

    It’s important to remember, too, that chatbots in general are very difficult to develop.

    In fact, Facebook recently decided to limit the rollout of its Messenger chatbot platform after it found many of the bots on it were unable to address 70% of users’ queries.

    Chatbots can, of course, be programmed to seem very humanlike and may even dupe us in certain situations – but it’s quite a stretch to think they are also capable of plotting a rebellion.

    At least, the ones at Facebook certainly aren’t.

  • 2017 is the year of dual-camera phones, but the best cameras are still single

    The current craze for dual-camera smartphones was predictable as early as the spring of last year. At the time, only LG and Huawei had added a second lens and sensor to the rear of their phones, but it felt obvious even then that the technology was going to take over. The interesting thing I’m noticing this year is that even as dual-camera systems are becoming more numerous, the phones with the best image quality still have a conventional single camera on the back. That’s liable to change with time, but for now the second camera’s benefits seem to be coming at the cost of the best image quality.

    When I say there’s a dual-camera craze, I mean it’s harder to find a 2017 flagship phone without the extra lens than with. Andy Rubin’s Essential Phone has two cameras on its rear, and so do the Asus ZenFone 3 Zoom and upcoming ZenFone 4, Huawei P10 and P10 Plus, LG G6 and V20, and the OnePlus 5 and its close cousin the Oppo R11. Motorola’s brand new Z2 Force is joining all of the above with its own dual-camera setup, and Samsung will soon be a member of the club too with its upcoming Note 8. A second camera is an easy thing to sell to people, especially after Apple embraced the idea with its iPhone 7 Plus.

    But the intrigue for me lies in the absentees from the dual-camera list, because those devices coincide perfectly with my favorite phones for mobile photography.

    The Google Pixel has been a revolutionary device for mobile imaging because of Google’s shockingly good image-processing algorithms. Where I previously thought that hardware like optics and a high-quality image sensor were the only things that could meaningfully advance picture quality, Google showed that a lot of clever math can result in sharpness and low-light performance leaps ahead of the competition. In another surprising twist, HTC took over from the Pixel this summer with its even better camera (in my judgment and that of DxOMark) on the HTC U11. Samsung simply iterated on its already excellent camera with the Galaxy S8 to take the third spot in my current ranking of best mobile cameras. None of those phones have a supplementary rear camera.

    An unusual aspect to my 2017 cameraphone ranking is that, for the first time in a long time, the iPhone doesn’t figure in the top three. I know it’s an unfair fight, given that the newest iPhone model is older than any of the competitors I rate higher, but this is a new phenomenon because the iPhone used to win unfair comparisons. The iPhone’s camera has been the standard setter for most of this decade because Apple has made it a priority, invested heavily, and has a massive team of 800 people working on it. But in 2016, the iPhone 7’s image quality improvements were negligible and the big innovation from the iSight team was the addition of the second telephoto camera on the 7 Plus and its associated portrait mode that automatically blurs out the background for a simulated bokeh effect.


    Photo by James Bareham / The Verge

    Apple didn’t have its portrait mode ready in time for the iPhone 7 Plus’ release, mostly because that’s a very complex thing to get right in all circumstances. My question now is, did Apple sacrifice resources that would previously have gone toward extending its picture-quality lead in order to improve its dual-camera software? I suspect there’s at least an element of truth to that supposition, especially when looking at how mightily others like OnePlus have struggled in developing their own portrait mode algorithms. Making dual cameras work harmoniously is a hard engineering challenge, and overcoming it seems to be costing companies the opportunity to advance their imaging in pure quality terms.
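
    At its core, simulated bokeh is a compositing trick: blur a copy of the frame, then use a per-pixel depth estimate to keep the subject sharp and show the blurred copy everywhere else. The toy numpy sketch below illustrates only that core idea – it is not Apple’s algorithm, and real implementations add depth estimation from the two cameras, soft mask edges, and disc-shaped rather than box blur kernels:

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur: average each pixel with its (2*radius+1)^2 neighbours.

    Uses np.roll, so edges wrap around - acceptable for a toy demo.
    """
    acc = np.zeros_like(img, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += 1
    return acc / count

def fake_portrait(img, depth, subject_max_depth, radius=1):
    """Keep pixels nearer than subject_max_depth sharp; blur the rest."""
    blurred = box_blur(img, radius)
    subject = depth < subject_max_depth  # boolean mask of the foreground
    return np.where(subject, img, blurred)

# Synthetic single-channel "photo": bright subject filling the left half.
img = np.zeros((10, 10))
img[:, :5] = 1.0

# Matching depth map: subject close to the camera, backdrop far away.
depth = np.ones((10, 10))
depth[:, :5] = 0.2

out = fake_portrait(img, depth, subject_max_depth=0.5)
```

    The subject’s pixels come through untouched, while background pixels just beyond the subject boundary pick up a fraction of its brightness – the soft halo you see around a blurred backdrop.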

    The reason why phone makers are willing to, at least temporarily, forgo the eternal race toward ever sharper and prettier images is their hope and belief that they can build entirely new uses and functions into their cameras. Discrete functions are more compelling reasons to buy a new thing than single-percentage-point improvements in quality. LG’s dual-camera system, for instance, integrates an extra wide-angle shooter that allows for more creative flexibility. LG is competing with cameras like the Pixel and U11’s by offering something that both of them lack.

    Apple’s camera on the next iPhone is sure to be revolutionary, even if its image quality doesn’t improve one iota. The ARKit software in iOS 11, the operating system with which the next iPhone will ship, has shown itself to be one of the most compelling, enticing, and easily programmable implementations of augmented reality yet. Accessing the experiences developers design with ARKit will make for a huge change, or at least expansion, in the way the iPhone camera is used. And Apple is also looking at other ways of expanding the functionality of cameras with things like its upcoming face-unlocking feature (which Samsung and others already offer).

    The law of diminishing returns is making itself apparent in many areas of smartphone development these days, and it appears that numerous companies are opting to invest their imaging resources toward creating new experiences rather than finessing and refining existing ones. So even while I say that the best mobile pictures are presently obtained from single-camera phones, I can definitely understand why others might think that the best total mobile photography experience might come from elsewhere.

  • Drones use wi-fi for 3D mapping to ‘see’ through walls

    Researchers at the University of California are using wi-fi-enabled drones to create a 3D-imaging system that could potentially allow them to “see” through walls.

    The technique, which involves two drones working in tandem, could have a variety of applications, such as emergency search-and-rescue, archaeological discovery and structural monitoring.

    BBC Click spoke to Professor Yasamin Mostofi to find out more about the project.

    See more at Click’s website and @BBCClick.

  • JK Rowling apologises over Trump disabled boy tweets

    JK Rowling. Image copyright: Reuters

    Author JK Rowling has apologised for incorrectly accusing Donald Trump of ignoring a disabled boy.

    A video emerged that appeared to show the US president refusing to shake the boy’s hand at the White House.

    “How stunning, and how horrible, that Trump cannot bring himself to shake the hand of a small boy who only wanted to touch the president,” the author said.

    But Marjorie Kelly Weer, mother of Monty, said Rowling’s interpretation of the clip was wrong.

    The Harry Potter author tweeted: “Re: my tweets about the small boy in a wheelchair whose proffered hand the president appeared to ignore in press footage.

    “Multiple sources have informed me that that was not a full or accurate representation of their interaction.

    “I very clearly projected my own sensitivities around the issue of disabled people being overlooked or ignored onto the images I saw and if that caused any distress to that boy or his family, I apologise unreservedly.”

    Rowling didn’t apologise to Mr Trump himself.

    Mr Trump is said to have shaken the boy’s hand as the president entered the room.

    Ms Weer wrote on Facebook: “If someone can please get a message to JK Rowling: Trump didn’t snub my son & Monty wasn’t even trying to shake his hand.”

    She also said her son was not all that keen on shaking hands anyway.

    Rowling has deleted her initial tweets on the subject.


  • The hi-tech badges made for hackers

    Hi-tech badges made by “hackers” for “hackers” were in great demand at the recent Def Con cyber-security conference.

    Hardware experts spent months creating the unofficial electronic wearables, which came complete with a mini processor, hidden “Easter eggs”, botnets and secret unlock codes to add features.

    Owners could use their badges to hack similar devices around the conference for “lulz” – in other words, to have fun at another’s expense.

    BBC Click’s Catharina Moh met up with AND!XOR, whose badges caught the attention of Def Con founder “Dark Tangent”.


  • New iPhone leaks show tap to wake, attention detection, and virtual home button

    More details about Apple’s upcoming iPhone have been uncovered in HomePod’s firmware — which runs iOS like the iPhone — revealing features including a tap to wake function, facial expression and attention detection, and the long-rumored removal of the home button. Apple accidentally released the firmware over the weekend, resulting in a frenzy of analysis about previously unknown features.

    Developers including Steve Troughton-Smith and Guilherme Rambo have been tweeting their findings, notably the discovery of the new iPhone’s bezel-less screen design. They’ve also concluded that the resolution for the iPhone 8 could be as much of a visual leap forward from current-generation iPhones as the iPhone 4’s Retina display was from the original iPhone. Apple is using codenames for both its face recognition feature and the bezel-less phone, called “Pearl ID” and “D22” respectively.

    A potential “attention detection” feature is also mentioned in the code, with some speculating that it may mean the phone will stay silent for notifications if it knows you’re already looking at the screen. Facial references such as “mouthstretch,” “mouthsmile,” and “mouthdimple” were also found, which are most likely a nod to Apple’s rumored facial recognition feature that can even detect faces in the dark using infrared.

    A tap to wake feature has also been discovered, and should be similar to the Windows Phone function that allows users to double-tap the screen to wake the phone.

    The home button looks to be gone in favour of a virtual one, but some held out hope that though Troughton-Smith didn’t find evidence of an ultrasound Touch ID, a fingerprint sensor under the display was still a possibility. Troughton-Smith shot that down too, tweeting, “I mentioned ultrasound, yes, but I searched for much, much more. There is no evidence whatsoever of any new kind of Touch ID.” The virtual home button is called the “home indicator,” and will most likely be hidden in certain contexts such as when watching a video.

    There was an Apple Watch discovery as well, hinting at a new skiing workout option for WatchOS 4 users.

    The leaks are the most authoritative since the iPhone 4 debacle in 2010, after a software engineer left a device prototype at a bar. “This is a rough situation for Apple,” Troughton-Smith told Wired. “For them to be the source of the only concrete leaks about it and its design is going to upset a lot of people internally.”


  • Microsoft Word now reads text aloud to help people with dyslexia

    Microsoft has been testing a number of text-to-speech features in Word over the years, but it’s finally found a solid way to implement the feature. In the latest Office 365 updates this month, the software giant is enabling a new Read Aloud feature in Word. It’s similar to the existing Read Mode that was introduced in December, but it now includes the ability to easily change speed and voice, while interacting with text or highlights and making edits in real time.

    The new options to interact with text while Word is reading text aloud mean the feature is more finely tuned towards users with dyslexia. Reading the text aloud makes it easier to spot and correct mistakes, and the option will also help those who just want to proofread a document. Read Aloud is probably a feature you’ll want to use with your headphones, and it’s now available in the review tab for Office 365 testers, with general availability to everyone later this year.


  • Leaked Galaxy Note 8 photos show dual cameras and rear fingerprint sensor

    As noted leaker Evan Blass tweeted this morning, when it rains, it pours. Not even 24 hours after Blass had published an official image of the front of Samsung’s upcoming Galaxy Note 8, he returned with a few more pictures, showing the rear, side, and stylus of the as-yet-unreleased handset.

    The phone is due to be unveiled later this month on August 23rd, but these leaks give us a good idea of what Samsung will be offering. The phone has the same edge-to-edge Infinity Display that made the Galaxy S8 such a head-turner, with the bezels looking, if anything, even slimmer. We can also see that Samsung is retaining the S8’s dedicated Bixby button, positioned on the right edge of the device, and, on the rear, we have a dual-camera system with dual flash.

    Unfortunately, this leaked image also shows a fingerprint sensor in much the same place as the Galaxy S8’s — on the rear of the phone, to the side of the camera. This positioning, up high and off-center, was criticized by reviewers, who found it hard to reach, and said it led to users accidentally smudging their own camera lenses. However, it seems the alternative (placing the sensor underneath the glass at the front of the phone) is not yet ready for full-scale manufacturing.

    Samsung, of course, needs the Note 8 to dispel the lingering sense of unease left by the stupendous failure of its predecessor, the Note 7. We’ll find out more about what the new Note has to offer later this month.

  • HP made an even more powerful VR backpack — but it’s not for gaming

    Two months after introducing a backpack PC for virtual reality, HP is introducing another one. But this time, it isn’t meant for gamers.

    HP’s new Z VR Backpack is being marketed as a workstation PC for all kinds of businesses — theme parks, automotive showrooms, real estate agencies, and anything else that might have a use for virtual reality. It’s designed to give them high-end performance so that, when customers are shown virtual reality demos of a car or a house they might be interested in, they don’t get distracted by bad graphics and stuttering frame rates.

    That means the Z VR Backpack is even more powerful than HP’s gaming backpack, the Omen X Compact Desktop. While both use Intel’s Core i7 processors (both Kaby Lake), the Z VR Backpack has an Nvidia Quadro P5200 GPU, instead of the GTX 1080 in the Omen. HP says the Quadro card it’s offering includes twice the frame buffer of the 1080. The backpack will also be configurable with up to 32GB of RAM.


    HP Z VR Backpack

    Image: HP

    The Z VR Backpack looks pretty much the same as the Omen, except that it’s solid black — ditching the red highlights and goofy logo that make the other look gamery. Naturally, it’ll be expensive: the starting price is $3,299. It’ll also ship with a dock that lets the backpack turn into something vaguely resembling a desktop PC. It’s supposed to begin shipping in September.

    Though VR has largely focused on gaming so far, HP is betting that businesses are a bigger opportunity. In addition to virtual showrooms and entertainment uses, it’s also expecting companies to put these backpacks to use for employee training. The company envisions them being useful for everyone from truck drivers to astronauts, and seemingly even for some military applications.


    HP Z VR Backpack

    Image: HP

    To help convince businesses of virtual reality’s use, HP is going to open 13 VR “immersion centers” where people can go to experience the hardware and the different ways it can be put to use. All 13 are supposed to be up and running by the end of September, with five launching in Europe, four in the Americas, and four in the Asia-Pacific region.