An MIT team has created color-changing 3D prints

Here’s another cool project out of MIT’s CSAIL labs. Researchers are looking to bring color-changing properties to the 3D-printing process in an attempt to help reduce material waste in the future. That last bit is admittedly a pretty lofty goal as far as this project is concerned, but at the very least, it could go a ways toward making 3D printing for manufacturing even more compelling for consumers.

The process is built on top of a familiar 3D-printing process that uses UV light to cure a liquid resin into a solid object. What’s new, however, is the addition of photochromic dyes. Once added to a print, the inks create a surface that turns different colors, based on the kind of light it’s exposed to. The researchers call the technology “ColorFab,” playing on a pretty standard 3D-printing naming convention.

The tech is more than just some modern equivalent to those Hypercolor shirts, however. In a call with TechCrunch last week, MIT professor Stefanie Mueller compared the technology to E Ink — explaining that, once light is applied, it holds that color. It’s also more than just a simple color-changing tech — once the resolution is high enough, the system can be used to create complex patterns.

The team’s hope is that the ability to update a product’s surface might stem users’ impulse to frequently buy more junk.

“Everybody wants to have the latest phone, the latest phone case, the latest clothing,” says Mueller. “That basically creates more waste and requires more and more material. We asked the question of whether there’s any way to update existing products without needing new materials.”

It’s a lot to ask — and to put my cynical hat on for the moment, I can’t imagine companies adding features intended to end the cycle of constant product refreshes. But it’s a compelling addition to 3D printing nonetheless, and one that could be added to the process fairly easily in the not so distant future.

Facebook begins privacy push ahead of tough new European law

Facebook will introduce a new privacy center this year that features all core privacy settings in one place, ahead of the introduction of a strict new EU data protection law that takes effect on May 25th. The European Union’s General Data Protection Regulation (GDPR) will restrict how tech companies collect, store, and use personal data. Facebook also says that it’s publishing its privacy principles for the first time, detailing how the company handles user details.

Sheryl Sandberg, Facebook’s chief operating officer, said in a speech last week that the new privacy center would give Facebook a “very good foundation to meet all the requirements of the GDPR and to spur us on to continue investing in products and in educational tools to protect privacy.”

The GDPR will enforce rules across the 28-member EU, including a rule that requires companies to report data breaches within 72 hours. Companies must also allow users to export their data and delete it. Under existing “right to be forgotten” provisions, people who don’t want certain data about them online can request companies to remove it. Companies that breach the GDPR will be subject to fines of up to 4 percent of their global annual revenue or €20 million ($24.8 million), whichever is higher.
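
As a rough illustration of how that fine ceiling works, the cap is simply the larger of the two figures; the revenue number below is hypothetical, and actual penalties are set case by case by regulators.

```python
def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    the greater of 4% of global annual revenue or a flat EUR 20 million.
    Illustrative only; regulators decide actual fines case by case."""
    return max(0.04 * global_annual_revenue_eur, 20_000_000)

# Hypothetical company with EUR 40 billion in annual revenue:
print(gdpr_max_fine(40e9))   # 1600000000.0 -> a ceiling of EUR 1.6 billion
# Smaller hypothetical company with EUR 100 million in revenue: the flat EUR 20 million applies
print(gdpr_max_fine(100e6))  # 20000000
```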

The new privacy center tool will be more comprehensive and unite key privacy settings rather than spreading them across multiple pages. Facebook currently promotes a more limited feature called Privacy Check-Up that gives you privacy controls over future posts, your profile’s About Me section, and app preferences.

As part of the privacy push, Facebook has also begun running short educational videos in users’ News Feeds that teach them how to delete old posts, explain what happens to user information when an account is deleted, and show how to manage data used for Facebook ads. Facebook, which has 2 billion users, said it will show different videos on different privacy topics throughout the year.

“We recognize that people use Facebook to connect, but not everyone wants to share everything with everyone – including with us,” wrote Erin Egan, Facebook’s chief privacy officer, in a blog post. Facebook’s privacy principles outline how the company approaches privacy, like giving users control of the data they own, and helping them understand how it’s used and secured. Facebook notes that it’s accountable for maintaining privacy while also conceding that it’s a process of continuous improvement.

Facebook isn’t the only company that’s introducing new privacy tools and ramping up privacy measures. Google recently rolled out a privacy dashboard, helping users identify which Google products are storing their data. Microsoft, which has faced privacy concerns from the EU about its collection of data, has also unveiled a new data collection viewer tool.

Americans are using less energy by staying at home

Americans are using less energy, paradoxically, by spending more time indoors, according to a new study in the journal Joule. The researchers found that the extra energy used at home was more than offset by lifestyle changes, such as online shopping and working from home, that kept people out of offices and retail stores. Even as worldwide energy consumption rises every year, this slight uptick in American hermitage reduced national energy demand by 1.8 percent over a year.

Study says e-cigarettes increase risk of cancer and heart disease

Regulators may have had a change of heart about the danger of using e-cigarettes, but scientists would beg to differ. A newly published New York University School of Medicine study indicates that vaping may put you at a “higher risk” of cancer and heart disease. Mice subjected to the equivalent of “light” e-cigarette smoking for 10 years (12 weeks in reality) suffered DNA damage to their bladders, hearts and lungs, along with reduced DNA repair and lower levels of repair proteins in the lungs. In short: nicotine can become a carcinogen in your body regardless of how it’s delivered.

The study isn’t completely shocking, given that researchers have already identified other harmful chemicals in e-cigarette vapor. And it’s not conclusive, either. While the testing shows that e-cigarettes are harmful, the highly compressed smoking exposure is far from what you’d see in real life. The study does also acknowledge that the tobacco nitrosamines (known carcinogens) found in body fluids of e-cigarette users are 97 percent lower than in cigarette smokers (but states this is “significantly higher than in nonsmokers”). This puts e-cigarette users on a similar level to users of nicotine patches.

You may not see more definitive results until additional animal testing in a year, and much longer than that for humans. Study author Moon-shong Tang also noted to Bloomberg that it’s not clear whether conventional cigarettes or e-cigarettes would be more harmful.

While there have been studies suggesting that e-cigs are probably less harmful, the study indicates that some nitrosation of nicotine occurs in the human body (in cigarettes it happens in the tobacco curing process). So, theoretically, you’re still facing some of the same dangers. Any “safety” therefore may come from the reduced level of exposure. The findings also support bids to regulate e-cigarettes based on their tobacco-like effects, such as the FDA’s former approach.

This article has been updated to clarify the findings of the study.

Qantas uses 150 acres of mustard seeds to power just ONE 15 hour biofuel flight between LA and Melbourne

The world’s first US-Australia biofuel flight was completed successfully today, powered by fuel made from mustard seeds.

The Qantas QF96 plane completed a 15-hour trans-Pacific flight using 24,000 litres of biofuel blend.

Qantas estimates the plane saved around 18,000kg in carbon emissions during the flight.

But while it lowered emissions in the air, the biofuel used to power the single journey took up 150 acres of land to create – an area bigger than the Vatican City.

The QF96 flight from Los Angeles to Melbourne used fuel developed by Canadian agricultural-technology company Agrisoma Biosciences.

It used blended fuel that was 10 per cent derived from Brassica carinata – a type of mustard seed that can be grown by farmers in between regular crop cycles.

The Boeing 787-9 Dreamliner reduced carbon emissions by 7 per cent compared with the airline’s usual flights over the same LA to Melbourne route.

By 2020 Qantas aims to have biofuel-based flights running regularly.

But not everyone is convinced that biofuels will be good for the environment.

Last year a new analysis – commissioned by the NGOs BirdLife and Transport and Environment – backed those calling for an end to the use of food-based biofuels.

It argued that demand for biofuels made from food crops has led to an increase in global food prices and is damaging the environment.

The carinata seed used in the latest flight makes high-quality oil with one hectare of seeds (2.47 acres) producing 400 litres of biofuel, writes Traveller.
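
A quick back-of-the-envelope check of the 150-acre figure in the headline, assuming it refers to the land needed to grow the full 24,000 litres as carinata-derived fuel (the article quotes a 10 per cent blend, so this reading is an assumption):

```python
# Rough land-use check based on the figures quoted above (assumptions, not Qantas/Agrisoma data)
litres_of_fuel = 24_000        # fuel burned on the LA-Melbourne flight
litres_per_hectare = 400       # carinata biofuel yield per hectare, per the article
acres_per_hectare = 2.47

hectares_needed = litres_of_fuel / litres_per_hectare   # 60 hectares
acres_needed = hectares_needed * acres_per_hectare      # ~148 acres, close to the quoted 150
print(round(acres_needed))
```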

Within just one day after harvesting the oil can be pressed and used as fuel.

‘It’s a tough crop. It grows where other crops won’t grow. It doesn’t need much water and it’s well understood by farmers,’ said Agrisoma CEO Steve Fabijanski.

‘They can grow it and do well with it.’

Biofuel goes through the same engineering and safety tests as normal aviation fuel.

‘The aircraft is more fuel efficient and generates fewer greenhouse emissions than similarly sized aircraft and today’s flight will see a further reduction on this route’, said Qantas International CEO Alison Webster.

‘Our partnership with Agrisoma marks a big step in the development of a renewable jetfuel industry in Australia – it is a project we are really proud to be part of as we look at ways to reduce carbon emissions across our operations.’

In 2012, Qantas and Jetstar trialled domestic biofuel flights made from cooking oil.

Other airlines have also incorporated biofuels into commercial flights.

In 2011, Alaska Airlines operated 75 flights on a cooking oil blend and Dutch airline KLM made biofuel flights in 2013.

‘Biojet fuel made from Carinata delivers both oil for biofuel and protein for animal nutrition while also enhancing the soil it’s grown in’, said Agrisoma CEO, Steve Fabijanski.

‘We are excited about the potential of the crop in Australia and look forward to working with local farmers and Qantas to develop a clean energy source for the local aviation industry.’

Erica, the creepy robot that is so life-like she appears to ‘have a soul’, will replace a Japanese TV news anchor in April

A creepy life-like robot called Erica is set to become a TV news anchor in Japan.

The humanoid, which has one of the most advanced artificial speech systems in the world, will take up her new role in April.

According to her creator Hiroshi Ishiguro, the droid is warm and caring, and may soon have an ‘independent consciousness’.

She has been described as so realistic she could ‘have a soul’.

Very few details have been revealed about Erica’s new job, but Dr Ishiguro said she will use AI to read news put together by humans.

‘We’re going to replace one of the newscasters with the android,’ said Dr Ishiguro, who is director of the Intelligent Robotics Laboratory at Osaka University.

Dr Ishiguro says he has been trying to get his creation on air since 2014.

Her voice may also be used to talk to passengers in autonomous vehicles, writes the Wall Street Journal.

Erica was developed with money from one of the highest-funded science projects in Japan, JST Erato.

Although she is unable to move her arms, she can work out where sound is coming from and knows who is asking her a question.

Using 14 infra-red sensors and face recognition technology, Erica can track people in a room.

WHO IS ERICA THE ROBOT?

Roboticist Hiroshi Ishiguro, director of the Intelligent Robotics Laboratory at Osaka University, created Erica the robot.

The Erica project is the result of collaboration between Osaka and Kyoto universities.

She was developed with money from one of the highest-funded science projects in Japan, JST Erato.

Although she is unable to move her arms she can work out where sound is coming from and knows who is asking her a question.

She is meant to be a 23-year-old human and has one of the most advanced speech synthesis systems ever developed.

Using 14 infra-red sensors and face recognition technology Erica can track people in a room.

Her ‘architect’, Dr Dylan Glas, says that Erica has learnt jokes, ‘although they’re not exactly side-splitters’.

‘What we really want to do is have a robot which can think and act and do everything completely on its own’, he said.

As well as learning to develop compassion, Erica is also tipped to be a Japanese news presenter sometime this year, and could make a national debut in April.

According to her creator, who also refers to himself as Erica’s ‘father’, this robot can be warm and caring, and may soon have an ‘independent consciousness’.

Erica’s ‘architect’, Dr Dylan Glas, says the robot has learned to tell jokes, ‘although they’re not exactly side-splitters’.

‘What we really want to do is have a robot which can think and act and do everything completely on its own’, he said.

This isn’t the first time Ishiguro has created a robot newsreader.

In 2014, Dr Ishiguro unveiled ultra-realistic robot news anchors called Kodomoroid and Otonaroid at a Tokyo museum.

Speaking at the time, Dr Ishiguro said he hoped they would be useful for research on how people interact with robots.

‘Making androids is about exploring what it means to be human,’ he told reporters, ‘examining the question of what is emotion, what is awareness, what is thinking.’

In a demonstration, the remote-controlled machines moved their lips in time to a voice-over, twitched their eyebrows, blinked and swayed their heads from side-to-side.

Powered by compressed air and servomotors, they were seated but could move their hands.

The speech can be input by text, giving them perfect articulation, according to Dr Ishiguro.

In 2015 Microsoft created the first robot TV anchor Xiaoice who introduced China’s ‘Morning News’.

Using deep learning techniques through Smart Cloud and Big Data, Xiaoice analysed weather data while giving a live broadcast.

The project was a collaboration between Microsoft Applications & Services Group East Asia and Shanghai Media Group (SMG) TV News Centre.

Waymo strikes a deal to buy ‘thousands’ more self-driving minivans from Fiat Chrysler

Waymo, the self-driving unit of Google parent Alphabet, has reached a deal with one of Detroit’s Big Three automakers to dramatically expand its fleet of autonomous vehicles. Fiat Chrysler Automobiles announced today that it would supply “thousands” of additional Chrysler Pacifica minivans to Waymo, with the first deliveries starting at the end of 2018.

Neither Waymo nor FCA would disclose the specific number of vehicles that were bought, nor the amount of money changing hands. The manufacturer’s suggested retail price for the 2018 Chrysler Pacifica hybrid minivan starts at $39,995. A thousand minivans would cost $40 million, so this was at the very least an eight-figure deal.

Waymo currently has 600 of FCA’s minivans in its fleet, some of which are used to shuttle real people around for its Early Rider program in Arizona. The first 100 were delivered when the partnership was announced in May 2016, and an additional 500 were delivered in 2017. The minivans are plug-in hybrid variants with Waymo’s self-driving hardware and software built in. The companies co-staff a facility in Michigan, near FCA’s US headquarters, to engineer the vehicles. Waymo also owns a fleet of self-driving Lexus RX SUVs that it has been phasing out in favor of the new minivans. (The cute “Firefly” prototypes were also phased out last year.)

The partnership is non-exclusive, but it’s a sign that both Waymo and FCA are happy enough to continue working with each other. The Pacifica satisfies Waymo’s need for a vehicle that can be used to move a good number of people at once. (The Pacifica can hold up to 8 passengers.) And FCA gets to look good standing next to one of the world’s biggest tech giants with some of the best self-driving technology around. FCA is also a member of a self-driving technology partnership with BMW, Intel, and Mobileye.

“In order to move quickly and efficiently in autonomy, it is essential to partner with like-minded technology leaders,” said the automaker’s CEO Sergio Marchionne in a statement. “Our partnership with Waymo continues to grow and strengthen; this represents the latest sign of our commitment to this technology.”

(Of course, the famously quotable Marchionne paints a more realistic portrait in his IRL interviews. For example, he told Bloomberg a few weeks ago, “This business has never been for the fainthearted. The technology changes that are coming are going to make it probably more challenging than it’s ever been.”)

Waymo’s self-driving Pacifica minivans have been tested in 25 cities in the US (most of which are in California) and are currently on the road in five main cities: Atlanta, San Francisco, Detroit, Phoenix, and Kirkland, Washington. Last November, Waymo began test-driving its minivans on public roads in Phoenix without a driver at the wheel. And it has said that it will begin offering rides to members of its Early Rider program in its “fully driverless” minivans in the months ahead.

Google took down over 700,000 bad Android apps in 2017. That’s 70 percent more than 2016’s removals

Google’s numerous safeguards designed to prevent malicious apps from reaching Android users led to the removal of over 700,000 apps from the Google Play Store in 2017, the company said today. That’s a 70 percent increase over the total removals in 2016. “Not only did we remove more bad apps, we were able to identify and action against them earlier,” Google Play product manager Andrew Ahn wrote in a blog post. “99 percent of apps with abusive contents were identified and rejected before anyone could install them.”
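
For context, the 70 percent year-over-year increase implies roughly 410,000 removals in 2016; a quick back-of-the-envelope check (the 2016 figure is inferred, not stated by Google):

```python
removed_2017 = 700_000
increase = 0.70                      # "70 percent more than 2016's removals"
removed_2016 = removed_2017 / (1 + increase)
print(round(removed_2016))           # ~411,765 apps removed in 2016 (inferred, not an official figure)
```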

Google attributes this success to its improved ability to detect abuse “through new machine learning models and techniques.”

Copycat apps designed to resemble popular mainstays remain a popular method of trying to deceive users, according to Ahn. Google removed over a quarter of a million of these impersonating apps last year. The company also says it kept “tens of thousands” of apps with inappropriate content (pornography, extreme violence, hate, and illegal activities) out of the Play Store. Machine learning plays a key role here in helping human reviewers keep an eye out for bad apps and malicious developers.

“Potentially harmful applications” (PHAs) are apps that attempt to phish users’ personal information, act as a trojan horse for malware, or commit SMS fraud by firing off texts without a user’s knowledge. “While small in volume, PHAs pose a threat to Android users and we invest heavily in keeping them out of the Play Store,” Ahn said.

Last year, Google put all of its malware scanning and detection technologies under the umbrella of Google Play Protect. The Android operating system automatically performs scans on installed applications to hunt for anything that’s out of place, and users can also manually trigger scans of their Android smartphones right in the updates section. (I’ve finally managed to stop hitting this button when checking for new versions of apps, but it took some time.)

Still, bad apps do occasionally slip through Google’s defenses. In August, Google discovered and kicked out 30 apps that were secretly using the devices they were installed on to perform DDoS attacks. Just earlier this month, the company removed 60 games from the Play Store — some of them meant for children — that were found to display pornographic ads. Google says it will continue to upgrade its methods and machine learning models against bad actors trying to trick consumers with apps that violate its policies. Those efforts indeed seem to be paying off in helping Android’s security turn a corner.

Google can now PREDICT when your flight will be delayed with ‘80% accuracy’ using historical airline data

Google can now tell you if your flight is delayed before your airline does.

The firm is using artificial intelligence to accurately predict when commercial flights will be late.

Google says it is using historical airline data, including information on previous flight delays, to forecast future problems.

Using artificial intelligence, Google says it can predict delays with ‘80 per cent confidence’.

While that’s probably not good enough to cancel your taxi to the airport, it is accurate enough to hint at any potential issues before you set foot in the terminal.

Google has yet to confirm exactly what ‘historical data’ it uses to work out its predictions, or how far ahead it can predict delays.

Google Flights will only showcase predictions with 80 per cent certainty or above, it has confirmed.
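
Google has not described how this works under the hood, but conceptually the feature behaves like a confidence filter over a prediction model's output. A hypothetical sketch follows; the function names, data shapes and the placement of the 0.8 threshold are assumptions for illustration, not Google's actual code.

```python
from typing import Optional

CONFIDENCE_THRESHOLD = 0.80  # Google Flights reportedly surfaces predictions only at 80% confidence or above

def delay_hint(predicted_delay_minutes: float, confidence: float) -> Optional[str]:
    """Return a user-facing delay hint only when the model is confident enough.
    Purely illustrative; not Google's implementation."""
    if confidence < CONFIDENCE_THRESHOLD:
        return None  # below the threshold, show nothing rather than an uncertain prediction
    return f"Likely delayed by about {round(predicted_delay_minutes)} minutes (prediction, not airline-confirmed)"

print(delay_hint(35, 0.86))  # shown to the user
print(delay_hint(35, 0.62))  # suppressed -> None
```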

These AI-generated forecasts will appear alongside confirmed delays from the airlines.

Google Flights will also show the reason for a confirmed delay.

How to Fly a Drone With Your Face: Send your drone flying by making a ridiculous face at it

It’s nice that consumer drones are getting easier and easier to use, incorporating more safeguards and autonomy and stuff. Generally, though, piloting them does still require some practice and skill, along with free hands and a controller that’s probably more expensive than it should be. This is why we’ve been seeing more research on getting drones set up so that unaltered, uninstrumented, and almost entirely untrained users can still do useful things with them.

At Simon Fraser University, roboticists are seeing how far they can push this idea, and they’ve come up with a system for controlling a drone that doesn’t require experience, or a controller. Or even hands. Instead, you use your face, and it’s totally intuitive and natural. As long as it’s intuitive and natural for you to make funny faces at drones, anyway.

Here is how to control a drone with your face in Canada:

Neutral faces (above) and trigger faces (below).

Ready: The user’s identity and facial expressions are learned and input is provided through touch-based interaction. Hold the drone at eye level, gaze deeply into its camera, and give it your best neutral look. Hold this neutral look until the drone is satisfied that you are consistently neutral. This should take less than a minute, unless you get the giggles. Next, rotate the drone so that it’s sideways, and make a “trigger” face, which is distinct from your neutral face. If you’re super boring, you can make a trigger face by just covering one eye, but come on, you’re better than that.

Aim: The robot starts flying and keeps its user centered in its camera view, while the user lines up the trajectory and chooses its power by “drawing back” analogous to firing a bow or slingshot. Place the drone on the ground in front of you, and it’ll take off and hover menacingly in front of you. Try and move from side to side to escape, and the drone will remorselessly yaw to keep you in view. Once you have it pointed exactly the wrong way, back away slowly and imagine that there’s a rubber band between you and the drone and it’s getting stretched more and more.

Fly: The user signals to the robot to begin a preset parameterized trajectory. The robot executes the trajectory with parameters observed at the end of the Aim phase. When the drone is facing in the direction you don’t want it to go and you think you’re far enough away, make your trigger face, and the drone will fly off backwards (directly away from you) on a ballistic trajectory, the strength of which is moderated by how far away from the drone you are when you make the face, rather like a slingshot.

Besides the kind of “slingshot” ballistic trajectory shown in the video, the drone could also be commanded to do a “beam” trajectory, where it travels in a straight line, or a “boomerang” trajectory, where it flies out and makes a circle before (hopefully) coming back again. The particular drone used here happened to be a Parrot Bebop slightly modified with an LED strip to provide visual feedback, and while the vision processing was done offboard, there’s no reason why it needed to be, because it doesn’t require a lot of power, according to the researchers.
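
Taken together, the interaction reads like a small three-phase state machine (Ready, Aim, Fly) in which the user's distance at trigger time sets the launch power. Here is a minimal sketch of that idea, assuming a simple linear distance-to-speed mapping and made-up parameter names; the paper's actual controller is more involved.

```python
from dataclasses import dataclass

@dataclass
class FaceInteraction:
    """Toy model of the Ready-Aim-Fly interaction described above.
    The linear distance-to-speed mapping and the numbers are assumptions for illustration."""
    state: str = "READY"
    max_speed: float = 5.0       # m/s, hypothetical launch-speed cap
    max_distance: float = 10.0   # m, user distance at which full power is reached
    launch_speed: float = 0.0

    def aim(self, user_distance_m: float) -> None:
        # Aim phase: the drone yaws to keep the user centered; backing away "stretches the slingshot".
        self.launch_speed = self.max_speed * min(user_distance_m / self.max_distance, 1.0)
        self.state = "AIM"

    def trigger(self, expression: str, trajectory: str = "slingshot") -> str:
        # Fly phase: the trained trigger face launches a preset trajectory directly away from the user.
        if self.state != "AIM" or expression != "trigger_face":
            return "holding position"
        self.state = "FLY"
        return f"executing {trajectory} trajectory at {self.launch_speed:.1f} m/s"

drone = FaceInteraction()
drone.aim(user_distance_m=8.0)        # user backs away about 8 m
print(drone.trigger("trigger_face"))  # -> executing slingshot trajectory at 4.0 m/s
```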

In tests, this technique for controlling a drone works surprisingly well. The researchers had users try and send the drone through a 0.8-meter diameter hoop located 8 meters away. Most people managed to get the drone within about a meter of the hoop most of the time, although the researchers point out that “the robot did not fly perfectly straight due to the inevitable errors of real-world robotics.” Damn that pesky real world and all its realness!

And finally, here are some conclusions, in no particular order, and lifted straight from the paper because I can’t possibly improve on them:

While the demonstrations in the paper have sent the robot on flights of 45 meters outdoors, these interactions scale to hundreds of meters without modification. If the UAV was able to visually servo to a target of interest after reaching the peak of its trajectory (for example another person, as described in another paper under review) we might be able to “throw” the UAV from one person to another over a kilometer or more… The long term goal of this work is to enable people to interact with robots and AIs as we now interact with people and trained animals, just as long imagined in science fiction… Finally, and informally, we assert that using the robot in this way is fun, so this interaction could have applications in entertainment.

I love that last assertion, because it’s too often overlooked in research—no matter what you’re working on, don’t forget how much fun robots are.

“Ready—Aim—Fly! Hands-Free Face-Based HRI for 3D Trajectory Control of UAVs,” by Jake Bruce, Jacob Perron, and Richard Vaughan from Simon Fraser University in Canada, was presented at the IEEE Canadian Conference on Computer and Robot Vision.

Could wearing goggles help you sleep and lose weight? The smart glasses that treat conditions from vertigo to diabetes

Specs are no longer just for seeing with — there are now a range of high-tech smart ‘goggles’ to treat a host of conditions, from diabetes to vertigo. 

There are even goggles to help you lose weight.

FOR DIABETES AND INSOMNIA

Goggles that shine bright light into the eyes are being tested as a treatment to prevent type 2 diabetes. They are based on the idea that the body clock, which controls the release of hormones, is regulated by light.

Light-detecting cells in the eyes, known as photoreceptors, send signals to the body clock in the brain, which then sets our sleep and wake rhythms.

Recent research by Northwestern University Hospital in the U.S. showed that our body clocks also dictate when the pancreas produces insulin in order to control blood sugar; another study, published in the journal Diabetes last year, found that our sensitivity to insulin drops during the night.

These circadian rhythms, as they’re known, can be impaired by staying indoors, working irregular hours, or a lack of sunlight in winter.

The goggles, called Re-Timer, have four tiny light-emitting diodes (LEDs) built into the top of the frame — they look like a pair of white specs without lenses. The lights are switched on to expose the eyes to bright light in the morning to increase insulin sensitivity and lower blood sugar.

The goggles are being trialled at Northwestern University with 34 patients with pre-diabetes. They will wear them for an hour each morning for four weeks and their glucose levels will be measured.

The glasses were first developed to treat insomnia and jet lag. For sleep problems, they use green light which stimulates the part of the brain responsible for regulating the sleep-wake cycle.

FOR VERTIGO

Israel-based Spoton Therapeutics’ goggles look like ordinary specs, but the lenses have ‘marks’ — tiny rectangles — on them to help patients with dizziness.

The marks are placed so they are in the patient’s peripheral vision. These reference points are thought to help steady the user.

Pole-dancing robot STRIPPERS that ‘stole the show’ at a Las Vegas club are headed to NYC this weekend

Technology companies have ‘disrupted’ the way we shop, communicate, travel and do just about everything in our daily lives.

Now, a pair of robot strippers have their eyes set on disrupting your next lap dance.

The android adult entertainers, dubbed the ‘Robo Twins,’ grabbed headlines earlier this month when they performed in a Las Vegas gentleman’s club at the 2018 Consumer Electronics Show.

This weekend, the Robo Twins are making an appearance in New York City when they perform Friday and Saturday night at the Sapphire 39 strip club in midtown Manhattan.

HOW CAN YOU GET A LOOK AT THE FAMOUS ROBOT STRIPPERS?

The ‘Robo Twins’ grabbed headlines earlier this month when they performed in Las Vegas at the 2018 Consumer Electronics Show

Now, the android adult entertainers will perform at the Sapphire 39 strip club in Midtown Manhattan

You can check out the Robo Twins on Friday and Saturday night

To get in, attendees have to pay a $30 cover

It’s the last chance to check out the robot strippers before they head back to London

The robots were created by British artist Giles Walker, whose pieces typically comment on social issues

A $30 cover will gain you entrance to see the sexy robots, which lure onlookers with their gyrating hips and smooth moves sliding up and down stripper poles.

The event description says it will be the Robo Twins’ ‘only New York appearance’ before they ‘go across the pond’ to London.

The Robo Twins were invited to the New York club after they ‘created a lot of interesting debate over in Las Vegas during the CES event at the beginning of the month,’ British artist Giles Walker, who created the robotic strippers, explained in an email to Dailymail.com.

‘They kind of stole the show down there in fact.’

Don’t try to slip the cyborg strippers some Bitcoin, however.

‘The Robo Twins prefer cash or credit due to the volatility of cryptocurrency,’ Sapphire said in a tweet on Friday.

The pole-dancing robots are made of scrap parts from mannequins, car parts and other rubbish.

Each of Walker’s works of art is aimed at commenting on social issues.

British artist Giles Walker’s robot strippers are made out of rubbish like scrap mannequins and car parts. They appeared alongside real dancers at the 2018 Consumer Electronics Show

At first glance, it may seem like the robots are supposed to be a commentary on robotics or job automation.

However, the robots are actually a comment on the nature of surveillance, power and voyeurism.

The robots’ CCTV heads are a testament to how surveillance cameras seem to be located all around us, especially in Britain.

At the time they were devised, ‘Britain was quickly becoming the most watched society in the world with these ‘mechanical peeping Toms’ appearing on every street corner,’ Walker told Dailymail.com.

‘We were told that it was for our safety and to reduce crime…when statistics prove that street lighting is actually a much more effective way of keeping us safe.’

This, combined with the bizarre terminology used to ‘sex up’ current events at the time, led Walker to the idea of doing the same for CCTV, he explained.

But the artist also took plenty of inspiration from real strippers.

The Robo Twins sport svelte bodies, towering heels and a lacy garter round their leg that holds fake money.

At the foot of the stripper poles, there are tip buckets with cheeky sayings like ‘MIT bound’ or ‘Need $$$ for batteries.’

The robots were originally created in 2012 by Walker for a show called ‘Peepshow.’

The Robo Twins also performed at adult exhibition Sexpo in Melbourne in 2016.

Five children have been given a new EAR made from their own cells using a world first technique

Five children have each been given a new ear grown from their own cells in a world-first trial.

Chinese researchers conducted the groundbreaking experiment on children with microtia – when the ear is underdeveloped.

The young patients’ own ear cartilage cells, obtained from their underdeveloped ears, were then used to form new ones in the landmark trial.

Striking pictures reveal how the ‘very exciting’ technique has worked, helping the children to have newly-shaped ears.

Microtia, which strikes between one in 6,000 and one in 12,000 births, can often cause hearing difficulties.

Conventional treatments for the condition revolve around synthetic ears or using cartilage taken from the child’s ribs.

The new treatment, pioneered by Guangdong Zhou at Jiao Tong University in Shanghai, offers hope of an easier method, New Scientist reports.

It involves taking a CT scan of the patient’s healthy ear to then create a 3D-printed replica, which is mirrored to represent their affected ear.

A mould is then made, which is littered with tiny holes, and filled with materials that degrade within the body.

A small sample of cells that make ear cartilage is then taken from the patient’s underdeveloped ear and used to fill the holes.

Over the space of 12 weeks, the cells begin to grow in the shape of the mould and the other materials in the mould begin to degrade.

The treatment also involves placing a ‘tissue expander’ underneath the skin of the affected ear. This helps stretch the skin.

By the time the process is nearing the end, this has created a flap of skin that the newly created ear structure is implanted into.

However, it is unclear how long it will take for the entire treatment to finish, and the researchers will monitor each of the five patients for five years.

The first patient in the study underwent the experiment two and a half years ago, and it has been a success for her.

Researchers said similar results have been recorded for the other four, but some of the new ears have become slightly distorted.

The findings, published in the journal EBioMedicine, have been welcomed by experts across the world, with some referring to it as ‘quite an achievement’.

Dr Tessa Hadlock, from the Massachusetts Eye and Ear Infirmary, described it as a ‘very exciting approach’.

Speaking to New Scientist, she said: ‘They’ve shown that it is possible to get close to restoring the ear structure.’

But the new technique needs to create better-looking ears than those that are made using conventional treatments before it is approved in clinics – which it has yet to do.

The process is similar to that used in an experiment in the 1990s at Massachusetts General Hospital to create the ‘Vacanti mouse’.

The mouse had a human ear growing on its back, which sparked furore among animal rights campaigners and religious groups.

‘Udelv’ self-driving delivery vans that bring groceries to your home are tested on public roads in California for the first time

A self-driving van designed to deliver groceries to your home has hit the streets of California for the first time.

The distinctive orange vehicles are capable of making around 40 deliveries, at a top speed of 25 miles per hour (40 km/h).

They are designed to address the ‘last mile’ problem of shipping, which is the most difficult for businesses to automate.

If testing is successful, they could become a common sight on streets around the world in the years to come.

The vehicle, built by Burlingame company Udelv, completed a 2.5-mile (4 km) loop from Draeger’s Market in San Mateo to two nearby customers.

The route included traffic lights, lane changes, intersections without signals and the two delivery stops.

A safety driver was on-board during the test mode demonstration, in line with the state’s laws on automated vehicles, and will remain in place as testing continues throughout February.

Udelv can also monitor and control the vehicles remotely, allowing it to override its automated systems and give a human operator control if needed.

The vehicles are equipped with level-four autonomous driving capabilities, according to their creators, which means they can cope with most situations automatically, but may struggle with some weather and road conditions.

Daniel Laury, the firm’s CEO, said: ‘Deliveries are the perfect first application for autonomous vehicles.

‘This is a historic revolution in transportation. We are reinventing deliveries.

‘McKinsey estimates that 80 per cent of all package deliveries will be autonomous in the next decade.

‘I am very proud that Udelv is first and leads this revolution.’

The Udelv vehicle is fully electric and features 18 secure cargo compartments of varying sizes, each equipped with automatic doors.

Customers open the locker with a press of a button on their smartphone or tablet and the vehicle heads on its way to the next delivery or back to the store.

The vehicle can drive for up to 60 miles (95 km) before needing to recharge, carrying up to 700 pounds (320 kg) of cargo.

A dedicated app is already available on iOS to track and reschedule deliveries, with an Android version to be released soon.

Udelv is now operating in the area around the test route, where Draeger’s customers can book delivery within a one or two hour window of ordering.

It is planning to test dozens of the vehicles on the roads of a few other states in the near future.

Udelv is planning to use a subscription business model to roll out its vehicle fleet.

Udelv is not the first company to have come up with new options for automated delivery, with a number of firms racing to corner the market.

In recent days, a pair of ex-Google engineers unveiled a new self-driving van also designed to deliver groceries to your home.

Silicon Valley startup Nuro.ai raised £65 million ($92 million) to create a working prototype of its ‘R1’ vehicle, which the company says will never seat a human inside.

The low-speed car is fitted with panels in its side that open up via an app to reveal its cargo, and Nuro claims it could have a road-legal fleet ready by 2022.

The smartphone app will give a code that pops open the vehicle’s side hatches so customers can fetch their items.

It will also let customers know when the vehicle is nearby so people know when to head outside for collection.

Nuro said it is even considering using facial-recognition cameras as part of its delivery process.

Back in August 2017, residents of Greenwich had access to the UK’s first self-driving delivery service.

Online supermarket Ocado successfully completed trial deliveries using driverless vehicles on residential and semi-pedestrianised roads.

The electric CargoPod is a street-legal vehicle equipped with multiple sensors and cameras placed around the vehicle’s body to navigate safely through the streets.

The van can hold up to 282 pounds (128 kg) of groceries at a time.

The compartment doors light up upon arrival to indicate where a customer’s shopping is contained.

Customers need to greet the delivery vans themselves and carry their shopping inside.

Global temperatures could break through the 1.5°C Paris Agreement threshold within FIVE years and cause weather chaos, Met Office warns

Global temperatures could reach 1.5°C above pre-industrial levels in the next five years, the Met Office has warned.

This is above the threshold set by the Paris Agreement on climate change.

The agreement commits countries to holding temperatures to ‘well below’ 2°C above pre-industrial levels and to curb increases to 1.5°C.

If the Met Office’s prediction comes true, it could cause climate chaos on Earth.

Increased temperatures caused by fossil fuels can trigger extreme weather patterns, with cyclones, floods and droughts becoming increasingly common as a result.

Average temperatures around the world are likely to be more than 1°C higher than those seen in the pre-industrial era, measured as between 1850 and 1900, and could reach 1.5°C higher at some point between 2018 and 2022.

There is also a small – around 10 per cent – chance that one of the next five years could see global temperatures soar to more than 1.5°C above 19th century levels, the Met Office said.

It is the first time such high values have been highlighted in the Met Office’s decade-long predictions, which are updated every year.

The new highs could come if a strong natural El Nino weather pattern in the Pacific, which pushes up world temperatures, combines with the global warming caused by human activity such as burning fossil fuels.

The latest warning of rising temperatures comes after three years of record heat, with 2016 globally the hottest year on record, and 2017 the warmest without the added impact of an El Nino.

This year is not expected to see temperatures exceed 1.5°C over pre-industrial levels.

Professor Stephen Belcher, chief scientist at the Met Office, said: ‘Given that we’ve seen global average temperatures around 1°C above pre-industrial levels over the last three years, it is now possible that continued warming from greenhouse gases along with natural variability could combine so we temporarily exceed 1.5°C in the next five years.’

The Paris Agreement’s more stringent 1.5°C limit was introduced into the global deal because some countries including low-lying island states say it is necessary to curb temperature rises for their very survival.

It relates to the global climate reaching such a level over a long-term average period, rather than hitting a temporary high, the scientists said.

Professor Adam Scaife, head of long range prediction at the Met Office, said: ‘These predictions show that 1.5°C events are now looming over the horizon, but the global pattern of heat would be different to more sustained exceeding of the Paris 1.5°C threshold.

‘Early, temporary excursions above this level are likely to coincide with a large El Nino event in the Pacific.’

But he added that continued greenhouse gas emissions leading to further warming would mean a greater chance of seeing years with temperatures of 1.5°C or more above pre-industrial levels in future years.

Meeting either of the temperature limits in the Paris Agreement, which all countries in the world are currently signed up to, would require emissions to fall to net zero by the second half of the 21st century.

El Nino years happen when a change in prevailing winds cause huge areas of water to heat up in the Pacific, leading to elevated temperatures worldwide.

Including El Nino years, 2016 was the warmest on record, with 2017 and 2015 the joint second warmest.

The main contributor to rising temperatures over the last 150 years is human activity, scientists have said.

This includes burning fossil fuels which puts heat-trapping greenhouse gases into the atmosphere.

They say man-made climate change has now overtaken the influence of natural trends on the climate.

Experts say the 2017 record temperature ‘should focus the minds of world leaders’ on the ‘scale and urgency’ of the risks of climate change.

Speaking last month, Dr Colin Morice, of the Met Office Hadley Centre, said: ‘The global temperature figures for 2017 are in agreement with other centres around the world that 2017 is one of the three warmest years and the warmest year since 1850 without the influence of El Nino.

‘In addition to the continuing sizeable contribution from the release of greenhouse gases, 2015 and 2016 were boosted by the effect of a strong El Nino, which straddled both years.

‘However, 2017 is notable because the high temperatures continued despite the absence of El Nino and the onset of its cool counterpart, La Nina.’

The El Nino event spanning 2015 to 2016 contributed around 0.2°C (0.36°F) to the annual average increase for 2016, which was about 1.1°C (2°F) higher than average temperatures measured from 1850 to 1900.
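
For readers checking the unit conversions in that paragraph: temperature differences convert from Celsius to Fahrenheit by a factor of 9/5, with no 32-degree offset, since these are anomalies rather than absolute temperatures.

```python
def anomaly_c_to_f(delta_c: float) -> float:
    """Convert a temperature *difference* from Celsius to Fahrenheit (no 32-degree offset)."""
    return delta_c * 9 / 5

print(anomaly_c_to_f(0.2))  # 0.36 F, the quoted El Nino contribution
print(anomaly_c_to_f(1.1))  # 1.98 F, i.e. roughly the "2F" above the 1850-1900 baseline
```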

The regional variations in temperature are themselves informative in understanding the mechanisms that cause warming in response to the continuing build up of greenhouse gases in the atmosphere.

Professor Tim Osborn, director of research at the University of East Anglia’s Climatic Research Unit, added: ‘It isn’t only the average global temperature that matters, we can also explain the geographical pattern of the warming.

‘Greater warming over land and in the Arctic region, and less warming in the sub-polar regions, are what we expect from our understanding of climate physics, and this is what we observe.’

The World Meteorological Organisation (WMO), which brings together five leading international datasets, said temperatures were on the rise over the long term.

WMO secretary-general Petteri Taalas said: ‘The long-term temperature trend is far more important than the ranking of individual years, and that trend is an upward one.

‘Seventeen of the 18 warmest years on record have all been during this century, and the degree of warming during the past three years has been exceptional.

‘Arctic warmth has been especially pronounced and this will have profound and long-lasting repercussions on sea levels, and on weather patterns in other parts of the world.’

And he said 2017’s warm temperatures were accompanied by extreme weather in many countries around the world.

The US had its most expensive year ever in terms of weather and climate disasters, while other countries saw their development slowed or reversed by tropical cyclones, floods and drought, he said.

President Donald Trump has announced his intention to pull the US out of the Paris Agreement, the world’s first comprehensive deal on cutting greenhouse emissions, which would leave the US as the only country not signed up to the treaty.

Climate change expert Bob Ward from the London School of Economics and Political Science, who was not involved in the study, said: ‘This record warm year has also been accompanied by exceptional extreme weather events around the world, including devastating hurricanes in the Caribbean and United States.

‘All countries are exposed to the growing impacts of climate change.

‘This year governments are due to start the process of assessing the size of the gap between their collective ambitions for reducing greenhouse gas emissions and the goals of the Paris Agreement.

‘The record temperature should focus the minds of world leaders, including President Trump, on the scale and urgency of the risks that people, rich and poor, face around the world from climate change.’

Uncertainties arising from incomplete global coverage, particularly a lack of observations from polar regions, and limitations of the measurements used to produce the data sets, have been included in the calculations.

Differences between the various estimates arise largely from the way that the data-sparse polar regions are handled.

Uber, Lyft and others pledge to improve urban transportation

With the rise of ride-sharing, alternative fuels and ongoing developments in autonomous vehicle technology, transportation is in the midst of a rather drastic transformation, and how we get around in the not too distant future is likely to be very different than how we get around today. But with so many companies working towards a new transportation future, things could get a little messy. To address that concern, over a dozen companies have now committed to 10 Shared Mobility Principles for Livable Cities, a pledge initiated by Zipcar cofounder Robin Chase.

Among the signatories are Lyft, LimeBike, Uber, Zipcar, Ofo, Mobike and Ola, and the Shared Mobility Principles website says, “The future of mobility in cities is multimodal and integrated. When vehicles are used, they should be right-sized, shared, and zero emission.” The principles include some expected goals like promoting equity, engaging with stakeholders and transitioning towards renewable energy. But others paint a collaborative picture that may come as a bit of a surprise. The tenth principle, for example, states, “We support that autonomous vehicles in dense urban areas should be operated only in shared fleets. Shared fleets can provide more affordable access to all, maximize public safety and emissions benefits, ensure that maintenance and software upgrades are managed by professionals, and actualize the promise of reductions in vehicles, parking, and congestion, in line with broader policy trends to reduce the use of personal cars in dense urban areas.”

During a press call yesterday, Joseph Okpaku, Lyft’s vice president of government relations, said, “We definitely do envision a future where the vast majority of autonomous vehicle rides will be done as part of a shared network. We think that’s the best way to realize all of the benefits that an autonomous future can bring in terms of rebuilding our cities.” Other principles include goals involving open data, fair user fees, prioritizing people over vehicles and planning cities and transportation alongside each other.

“Transportation is really a gateway to opportunity and cities really have to be places where you want to live, work, and play,” said Chase. “These companies have taken an incredibly bold step by supporting these principles.”

Google backs initiative to create ‘universal stylus’ pen that can draw on almost ANY device (but Apple and Microsoft have yet to join the project)

  • Google and 3M are the latest tech firms to support the Universal Stylus Initiative
  • The Universal Stylus Initiative aims to persuade tech companies to create penlike devices that can be used across all types of tablets and computers
  • So far, the group counts Intel, LG, Dell and tablet maker Wacom as members
  • Despite this, Apple, Microsoft and Samsung aren’t supporters of the initiative

Google and 3M just became the latest tech giants to join an initiative aiming to create a stylus that can write or draw on almost any device.

The project, called the Universal Stylus Initiative (USI), was created in 2014 and already counts tech bigwigs Intel, LG, Dell and tablet maker Wacom as members, among other companies.

With the addition of Google, there are now 30 companies signed onto the effort.

‘This healthy mix…points toward the growing strength of the active stylus ecosystem worldwide,’ the organization said on Tuesday in a statement.

Despite USI’s growing popularity, Apple, Microsoft and Samsung still aren’t a part of the group.

USI wants to create a standard stylus design that manufacturers can use to create pens that are compatible with touch screen devices from different gadget makers, such as tablets and computers.

If tech firms create styluses that use the same specifications, a consumer could buy one stylus and theoretically use it on both a Dell laptop and a Google Pixelbook.

Styluses typically contain sensors that detect pressure, movement and the orientation of the device, BBC noted.

The USI standard recognizes 4,096 levels of pressure sensitivity.
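
4,096 pressure levels corresponds to a 12-bit value (2^12 = 4,096). Below is a hypothetical sketch of how a driver might normalize such a reading; the function and constant names are made up for illustration and are not part of the USI specification.

```python
USI_PRESSURE_LEVELS = 4096  # 12-bit pressure resolution (2**12), per the USI figure quoted above

def normalize_pressure(raw_level: int) -> float:
    """Map a raw 0..4095 stylus pressure level to a 0.0..1.0 value.
    Illustrative only; not taken from the actual USI protocol."""
    if not 0 <= raw_level < USI_PRESSURE_LEVELS:
        raise ValueError("raw pressure out of range for a 12-bit reading")
    return raw_level / (USI_PRESSURE_LEVELS - 1)

print(normalize_pressure(2048))  # ~0.5, a mid-strength press
```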

With a standardized design, USI hopes that each stylus will be able to store a user’s settings, like ink color and style, while being able to switch to a less noisy frequency to prevent interference.

The stylus would also be equipped to work even if a friend is drawing with a stylus on the same touchscreen device.

Many tech giants already produce their own proprietary styluses.

Earlier this year, Google released the Pixelbook Pen, which works with the company’s line of Pixelbook laptops.

It seems possible that future Pixel devices and Chromebooks will be built using the USI standard.

Microsoft and Apple also have their own styluses.

The iPhone maker released its Apple Pencil for the iPad Pro in November 2015, while Microsoft’s Surface Pen became available in mid-2017.

Samsung has released styluses along with several of its flagship smartphones over the past few years.

The Korean gadget maker’s infamous, exploding Galaxy Note 7 came with its own pen, as did prior Samsung smartphones.

Kenichiro Yoshida will serve as Sony’s new CEO

Sony is getting a new CEO after it announced that CFO Kenichiro Yoshida will replace Kazuo Hirai as the head of the Japanese firm.

The move will happen April 1, with Hirai shifting to the role of Chairman.

“I have dedicated myself to transforming the company and enhancing its profitability, and am very proud that now, in the third and final year of our current mid-range corporate plan, we are expecting to exceed our financial targets,” Hirai said in a statement.

“As the company approaches a crucial juncture, when we will embark on a new mid-range plan, I consider this to be the ideal time to pass the baton of leadership to new management, for the future of Sony and also for myself to embark on a new chapter in my life,” he added.

Hirai took the CEO role in 2012 and he has worked in partnership with Yoshida to turn things around in recent years. Among its key initiatives, Sony downsized its loss-making mobile division with layoffs and a more focused set of products, while the PS4 has been a huge financial success. The firm also placed more focus on components, moved into AI, and Hirai personally oversaw the appointment of former Fox exec Tony Vinciquerra as Sony Pictures’ new CEO.

The company reveals its latest financial report today so we may get more information on its plans.

Tesla borrows $546 million, using its car leases as collateral

Tesla is using its car leases as collateral for a big $546 million loan as it turns to debt markets to raise additional cash to combat the blistering burn rate of its auto and energy business, according to multiple reports.

The bonds are pegged to leases of its Model S and X cars, and it marks the first time that Elon’s electric albatross has turned to asset-backed securities for new money (Tesla has already tapped public markets, junk bond markets, and convertible bonds to get additional capital).

All of this is driven by the company’s scorching burn rate. A report from Bloomberg citing Barclays Plc analyst Brian Johnson said Tesla could spend $4.2 billion this year.

Tesla’s cash crunch can be blamed on the company’s continued delays and cost overruns associated with the production of the Model 3, Tesla’s low-end electric vehicle (priced to sell at $35,000).

Investors in Tesla’s asset-backed bonds make money off of the lease payments of the vehicles and then on the resale value of the cars. According to Bloomberg, some analysts have cautioned that the EVs might not have the same high resale value as conventional cars, noting there’s not much of a track record for reselling EVs.

While this may be Tesla’s first push into asset backed securities, it’s not the only non-traditional path Musk has pursued to raise cash for his businesses.

The Boring Co., his other other business, which plans to build tunnels under cities to move more cars around, sold $20 million worth of flamethrowers (er, roofing torches).

Airbus’ autonomous Vahana aircraft completes its first test flight

Seems like just yesterday Airbus’ Vahana autonomous electric vertical take-off and landing (VTOL) craft was little more than a painted concept, but now it’s actually flown, during a full-scale prototype test that lasted just under a minute and during which the Vahana aircraft was fully self-piloted and flying at a height of 16 feet off the ground.

The Vahana VTOL, which resembles a complicated helicopter or an overgrown drone, depending on your perspective, is being developed by Airbus’ Silicon Valley skunkworks A³, and is aiming to eventually become something that can actually offer service to customers and transport people and goods within cities, cutting above traffic and making short-hop trips between strategically placed launch and landing pads.

This first flight is obviously a far cry from a working, commercial passenger drone service, but the successful first flight, which was followed by a second successful flight the next day, is a step in the right direction.

Next, Vahana says it’ll aim to move from being able to hover the vehicle, to being able to have it fly itself directionally, which will obviously be a key ingredient in terms of getting people and stuff from point A to point B.