Coronavirus outbreak has led to significant reduction in air pollution levels worldwide
The coronavirus outbreak has led to a significant reduction in air pollution levels around the world, according to European Space Agency satellite images.
Data from the European Space Agency's satellite show that nitrogen dioxide (NO2) levels in many cities in Asia and Europe have been significantly lower over the last six weeks than in the same period last year.
According to Professor Paul Monks, this is an important lesson for the future.
One of the biggest declines in pollution can be seen in China's Wuhan, where the coronavirus epidemic broke out in January. Hundreds of factories operate in the city of 11 million, supplying automotive parts and other equipment worldwide.
According to NASA, nitrogen dioxide levels in eastern and central China have fallen by 10-30 percent over the past few weeks.
Armenian scientist wins Marie Skłodowska-Curie individual fellowship for the first time in Armenia
For the first time in Armenia, an Armenian scientist, plant geneticist Anna Nebish, has won an EU Horizon 2020 Marie Skłodowska-Curie Individual Fellowship. Anna Nebish conducts grape genetics research at the Institute of Molecular Biology of NAS RA and the YSU Chair of Genetics and Oncology. The fellowship will allow her to search for new genes, with comprehensive new studies planned at the Institute of Grape and Wine Research in the Spanish city of Logroño. The results will make it possible to obtain new grape varieties not through the use of chemicals but through grape breeding and genetic engineering, offering innovative solutions for the development of viticulture and winemaking.
A novel artificial intelligence system that predicts air pollution levels
A team of Loughborough University computer scientists hopes to help tackle the problem of air pollution with a new artificial intelligence (AI) system they have developed that can predict air pollution levels hours in advance.
The technology is novel for a number of reasons, one being that it has the potential to provide new insight into the environmental factors that have significant impacts on air pollution levels.
Professor Qinggang Meng and Dr. Baihua Li are leading the project, which is focused on using AI to predict PM2.5, particulate matter of less than 2.5 microns (2.5 × 10⁻⁶ m) in diameter, which often manifests as reduced visibility in cities and hazy-looking air when levels are high.
Particulate matter is a type of air pollutant and it is the pollutant with the strongest evidence for public health concern.
This is because the particles are so small they can easily get into the lungs and then the bloodstream, resulting in cardiovascular, cerebrovascular and respiratory impacts.
Systems that can predict PM2.5 already exist, but Loughborough University's research looks to take the technology to the next level.
The system the researchers have developed is novel for the following aspects:
- It predicts PM2.5 levels in advance—giving predictions for the levels in one hour to several hours' time, plus 1-2 days ahead
- It interprets the various factors and data used for prediction, which could lead to a better understanding of the weather, seasonal and environmental factors that can impact PM2.5
- It doesn't just predict one figure; it predicts the PM2.5 level plus a range of values the air pollution reading could fall within—known as 'uncertainty analysis'
- It has the capability to be used as an air pollution analysis tool in a carbon credit trading system.
The system's uncertainty analysis and ability to understand factors that affect PM2.5 are particularly important as this will allow potential end-users, policymakers and scientists to better understand related causes of PM2.5 and how reliable the prediction is.
Dr. Yuanlin Li is the Research Associate working on the project at Loughborough University. The LU team created the system using machine learning—a type of artificial intelligence technology that uses large amounts of data to learn rules and features, so a system can make predictions.
The researchers used public historical data on air pollution in Beijing to train and test the algorithms; China was selected as the focus as 145 of 161 Chinese cities have serious air pollution problems.
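As a rough illustration of the kind of "uncertainty analysis" described above, one can train one model per quantile so that each forecast carries a range as well as a point value. The sketch below uses scikit-learn's quantile-loss gradient boosting on synthetic data; the feature list, coefficients and values are invented stand-ins, not the Loughborough system itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy features: [wind speed, humidity, previous-hour PM2.5], all scaled to 0-1
X = rng.uniform(0, 1, size=(2000, 3))
# Toy target: next-hour PM2.5 in µg/m³ (an invented relationship, for illustration)
y = 80 * X[:, 2] - 30 * X[:, 0] + 20 * X[:, 1] + 40 + rng.normal(0, 5, 2000)

# One model per quantile: lower bound, median (point forecast), upper bound
models = {}
for name, q in [("lo", 0.05), ("mid", 0.50), ("hi", 0.95)]:
    m = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=100)
    m.fit(X, y)
    models[name] = m

x_new = np.array([[0.2, 0.5, 0.7]])  # conditions for the coming hour
lo = models["lo"].predict(x_new)[0]
mid = models["mid"].predict(x_new)[0]
hi = models["hi"].predict(x_new)[0]
print(f"predicted PM2.5: {mid:.1f} µg/m³, 90% interval [{lo:.1f}, {hi:.1f}]")
```

Fitting a separate model per quantile is one common way to get prediction intervals from tree ensembles; the real system would be trained on the historical Beijing readings rather than synthetic data.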
The developed system will now be tested on live data captured by sensors deployed in Shenzhen, China.
The system developed at Loughborough University is part of a wider research project funded by the Newton Fund, which has four partners: Satoshi Systems Ltd, Loughborough University, Shenzhen Institutes of Advanced Technology, and EEG Smart Intelligent Technology in China.
The aim of the project is to explore how carbon can be used as a tradeable commodity to establish a new effective economic leverage for controlling emissions.
It is envisaged that cities, regions and factories will be given credits for how much carbon they can emit and if they go over it must 'buy' more credits. Alternatively, if a location falls under its limit, it can sell the surplus credits on the carbon market for a profit.
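The buy/sell rule described above fits in a few lines of code. The following is a hypothetical sketch of one settlement period; the function name, units (tonnes) and price are invented for illustration and are not part of the Newton Fund project.

```python
# Hypothetical settlement rule for one carbon-trading period.
def settle_credits(allowance_t, emitted_t, price_per_tonne):
    """Return (tonnes_to_buy, tonnes_to_sell, cash_flow) for one period."""
    if emitted_t > allowance_t:
        shortfall = emitted_t - allowance_t
        return shortfall, 0.0, -shortfall * price_per_tonne  # must buy credits
    surplus = allowance_t - emitted_t
    return 0.0, surplus, surplus * price_per_tonne  # may sell the surplus

# A factory allowed 1,000 t that emitted 1,150 t at 25.0 per tonne must buy
# 150 t of credits; one that emitted only 900 t can sell 100 t at a profit.
print(settle_credits(1000, 1150, 25.0))
print(settle_credits(1000, 900, 25.0))
```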
The aim is to integrate Loughborough University's PM2.5 prediction model onto an online platform that can be accessed by participants of the carbon trading scheme.
This will allow participants to use the system to access real-time, meaningful information on pollution levels that will aid them with designing a trading strategy.
A team of Rice University engineers has introduced the first neural implant that can be both programmed and charged remotely with a magnetic field.
Their breakthrough may make possible implanted devices such as a spinal cord-stimulating unit powered by a battery-operated magnetic transmitter on a wearable belt.
The integrated microsystem, called MagNI (for magnetoelectric neural implant), incorporates magnetoelectric transducers. These allow the chip to harvest power from an alternating magnetic field outside the body.
The system was developed by Kaiyuan Yang, an assistant professor of electrical and computer engineering; Jacob Robinson, an associate professor of electrical and computer engineering and bioengineering; and co-lead authors Zhanghao Yu, a graduate student, and graduate student Joshua Chen, all at Rice's Brown School of Engineering.
MagNI targets applications that require programmable, electrical stimulation of neurons, for instance to help patients with epilepsy or Parkinson's disease.
"This is the first demonstration that you can use a magnetic field to power an implant and also to program the implant," Yang said. "By integrating magnetoelectric transducers with CMOS (complementary metal-oxide semiconductor) technologies, we provide a bioelectronic platform for many applications. CMOS is powerful, efficient and cheap for sensing and signal processing tasks."
He said MagNI has clear advantages over current stimulation methods, including ultrasound, electromagnetic radiation, inductive coupling and optical technologies.
"People have been demonstrating neural stimulators on this scale, and even smaller," Yang said. "The magnetoelectric effect we use has many benefits over mainstream methods for power and data transfer."
He said tissues do not absorb magnetic fields as they do other types of signals, and will not heat tissues like electromagnetic and optical radiation or inductive coupling. "Ultrasound doesn't have the heating issue but the waves are reflected at interfaces between different mediums, like hair and skin or bones and other muscle."
Because the magnetic field also transmits control signals, Yang said MagNI is also "calibration free and robust."
"It doesn't require any internal voltage or timing reference," he said.
Components of the prototype device sit on a flexible polyimide substrate with only three components: a 2-by-4-millimeter magnetoelectric film that converts the magnetic field to an electric field, a CMOS chip and a capacitor to temporarily store energy.
The team successfully tested the chip's long-term reliability by soaking it in a solution and testing in air and jellylike agar, which emulates the environment of tissues.
The researchers also validated the technology by exciting Hydra vulgaris, a tiny octopuslike creature studied by Robinson's lab. By constraining hydra with the lab's microfluidic devices, they were able to see fluorescent signals associated with contractions in the creatures triggered by contact with the chips. The team is currently performing in-vivo tests of the device on different models.
Charge batteries through skin with permanent implantable device concept
The team has developed a way to remotely charge a battery, such as that in a pacemaker, using a soft, biocompatible material that absorbs sound waves passed through the body.
Soft and flexible materials can ultrasonically charge bioelectronic implants, which could help to reduce the need for surgical treatment.
Electronic devices are increasingly used to remedy serious and long-term health problems, such as pacemakers to regulate heartbeat, electronic pumps that release insulin, and implantable hearing aids. Key design considerations for these devices are minimizing size and weight for patient comfort and ensuring that the device is not toxic to the body.
Another stumbling block is how to power the devices. Batteries keep them working for a while, but changing the batteries demands invasive surgery. Ideally, the power source needs to be recharged wirelessly.
Avoiding bias and misinformation online
The last thirty years have witnessed a technological and cultural revolution related to the notions of information and knowledge generation, sharing, and access. The birth and progressive evolution of the World Wide Web (WWW) have led to the availability of a massive and distributed repository of heterogeneous data and potential information, openly available to everyone. This phenomenon has been further emphasised by the conception and implementation of Web 2.0 technologies, which allow every user to generate content and share it directly with peers through social media, with almost no traditional form of trusted intermediary control.
The availability of an enormous and intangible world of potential answers to a multiplicity of information needs has motivated a wealth of research aimed at defining and developing effective and efficient systems capable of providing the right information to users in a timely manner, offering them support in finding a path through the intricate forest of the WWW. Among these, search engines and recommender systems nowadays constitute two prominent categories of systems that address these issues. Huge efforts have been made over recent years to improve the performance of these systems, by increasingly accounting for the notion of context and by leveraging user-system interactions in an attempt to automatically learn the real user context and dynamically adapt to it.
To this purpose, a wide range of techniques has been exploited, including machine learning and various other techniques falling under the umbrella of Artificial Intelligence. In recent years, search engines have become able to process multimedia content and capture some elements of the user context to tailor the search outcome to each specific user, thus overcoming the ‘one size fits all’ search paradigm. Moreover, the conversational search paradigm has been introduced to offer users a human-like dialogue interface, which can ease the user-system interaction and provide better, more precise, and more relevant information through dialogues. In the domain of recommender systems, where personalisation is a core concept, the role of context has also been recognised and is increasingly considered by the research community.
The idea and foundations of the Semantic Web have promised a further step towards offering a semantic structure to the WWW wealth of data and information. The availability of data and content related to various knowledge domains has motivated several attempts to provide formal languages and technologies for representing (domain-specific) knowledge, as well as for reasoning with it, with the perspective of providing users with structured knowledge representation and management; this offers a means to fill their knowledge gaps with a better automated support. Moreover, on top of the WWW some human-generated resources (such as Wikipedia and linked open data) have made it possible to provide structured content (knowledge) that can be easily accessed, also through more traditional search systems.
Xiaomi Gets Patent for Smart Masks
Chinese company Xiaomi has been granted a patent for smart masks that would come equipped with sensors and a chip to gather real-time data about the air you breathe. The sensor would record data such as total wearing time, pollution absorption, breathing volume and breath counts. The design consists of a pollutant filter and a sensor that records how long a person has been wearing the mask.
According to a report on the Abacus website, the US Patent and Trademark Office has granted Xiaomi the patent for a "smart mask" design that was filed in June 2016. There is also a built-in battery that powers the standard air filter. The smart mask will also come with sensors like accelerometers and gyroscopes.
The collected data will be stored on the mask using the storage module and can also be transferred to other devices, thanks to the connection module. Xiaomi already sells generic masks to ward off air pollution.
Scientists Invent Device to Generate Electricity From Rain
A team of engineers has figured out how to take a single drop of rain and use it to generate a powerful flash of electricity.
The City University of Hong Kong researchers behind the device, which they’re calling a droplet-based electricity generator (DEG), say that a single rain droplet can briefly generate 140 volts. That was enough to briefly power 100 small lightbulbs and, while it’s not yet practical enough for everyday use, it’s a promising step toward a new form of renewable electricity.
The material the device is made from contains a quasi-permanent electrical charge, and the rain is merely what triggers the flow of energy.
A New Facility Is Set to Produce Oxygen From Moon Dust
Although the Moon has no atmosphere, it has loads of oxygen, all mixed up with the dust on the lunar surface in the form of oxides.
Last year, scientists published a paper on how to extract it from a Moon dust (regolith) simulant; now, the first prototype oxygen plant will attempt that extraction on a larger scale.
The facility, set up at the European Space Agency's European Space Research and Technology Centre in the Netherlands, will use the technique developed by Lomax and her colleagues.
We know, based on returned samples of lunar regolith - the loose dust, rocks and dirt on the surface of the Moon - that oxygen is actually really abundant in this material. Between 40 and 45 percent of the regolith by weight is oxygen.
Using an Earth-made analogue of lunar regolith, known as lunar regolith simulant, past attempts to extract the oxygen have had poor results: the processes were too complicated, too low-yield, or destroyed the regolith.
"Elixir of immortality" found in central China's ancient tomb
Archaeologists in central China's Henan Province said Friday that the liquid found in a bronze pot unearthed from a Western Han Dynasty (202 BC-8 AD) tomb is an "elixir of life" recorded in ancient Taoist literature.
About 3.5 liters of the liquid was excavated from the tomb of a noble family in the city of Luoyang last October. It was initially judged by archaeologists to be liquor as it gave off an alcohol aroma.
However, further lab research found that the liquid is mainly made up of potassium nitrate and alunite, the main ingredients of an immortality medicine mentioned in an ancient Taoist text, according to Pan Fusheng, leading archaeologist of the excavation project.
A large number of color-painted clay pots, jadeware and bronze artifacts were also unearthed from the tomb, which covers 210 square meters. The remains of the tomb occupant have also been preserved.
How are robots contributing to the fight against coronavirus?
Coronavirus has now reached more than 20 countries. The disease has yet to be declared a pandemic, but the medtech industry is already stepping up with solutions to contain its spread. Using a robot equipped with a camera, microphone and stethoscope, one quarantined patient has been able to consult with clinicians without coming into direct contact with them.
Providence Regional Medical Center chief of infectious diseases Dr George Diaz said: “The nursing staff in the room move the robot around so we can see the patient in the screen, talk to him.”
This isn’t the only robot that’s being used to interact with quarantined people. A hotel in Hangzhou is being used to isolate more than 300 people suspected to have the virus, and has been using a robot to deliver food to their bedrooms. The hotel guests were on the same flight as travellers from Wuhan, and will remain in the hotel for two weeks as a precautionary measure. Multiple food delivery robots have been deployed on all 16 stories of the hotel.
Likewise in Guangzhou City, at the Guangdong Provincial People’s Hospital, autonomous delivery robots are being used to transport drugs around the hospital. The robots are loaded up with medicines and given instructions of where in the hospital to go to, and then head to their destination unaided. They’re able to open and close doors and take the lift without any human assistance.
One robot is able to carry out the delivery tasks of three people, making the entire drug delivery process faster and reducing the risk of clinical staff contracting 2019-nCoV and spreading it throughout the hospital.
Hong Kong scientists have developed a rapid coronavirus diagnostic device
Hong Kong scientists have developed a portable device that can quickly and effectively diagnose infection with the new coronavirus, 2019-nCoV, in just 40 minutes. The device has already begun to be used in a number of regions of China, including Hubei, the center of the epidemic. "We have sent it to many places and hope that people will use it," said Ven Veitszyan from the Hong Kong University of Science and Technology, whose group analyzes bodily fluid samples. Earlier, the same group of Hong Kong researchers produced similar devices for the rapid diagnosis of bird flu and swine flu.
Mismanaged waste 'kills up to a million people a year globally'
Mismanaged waste is causing hundreds of thousands of people to die each year in the developing world from easily preventable causes, and plastic waste is adding a new and dangerous dimension to the problem. Municipal waste frequently goes uncollected in poorer countries and its buildup fuels the spread of disease. Between 400,000 and 1 million people are dying as a result of such mismanaged waste, according to the charity Tearfund.
While mismanaged waste has been a problem for decades, the growth of plastic pollution, which does not break down in the environment, is adding a fresh set of problems to an already dire situation. Plastic waste is blocking waterways and causing flooding, which in turn spreads waterborne diseases. When people burn the waste to get rid of it, it releases harmful toxins and causes air pollution.
Every second, a double-decker busload of plastic waste is burned or dumped in developing countries, the report found. When some plastics deteriorate, they can leach harmful chemicals into the environment and break down into microplastics, with effects that are still poorly understood and largely undocumented in poorer countries.
Can artificial intelligence be taught how to joke?
Lately, machines have been proving themselves as good as or even superior to people at certain tasks: they are already better at Go, chess, and even Dota 2. Algorithms compose music and write poetry. Scientists and entrepreneurs estimate that artificial intelligence will greatly surpass human beings in the future.
A large part of what makes us human is our humor, and even though it’s believed that only humans can crack jokes, many scientists, engineers, and even regular people wonder: is it possible to teach AI how to joke?
Compared to composing music, it’s hard to describe what makes us laugh. Sometimes, we can hardly explain what exactly amuses us and why. Many researchers believe that a sense of humor is one of the last frontiers artificial intelligence needs to conquer to truly match human beings. Research shows us that a sense of humor started developing long ago in people in the course of mate selection.
This can be explained by the direct correlation between intellect and a sense of humor. Even today, someone’s sense of humor can indicate the level of their intellect. The ability to joke requires difficult skills such as language proficiency and a broad frame of reference. In fact, a good command of language is especially important for certain national styles of humor (British, for example) that are based primarily on wordplay. All in all, teaching artificial intelligence how to joke is no easy task, but researchers from around the world are trying.
Software company founded by US-Armenian businessman secures $30m in venture capital while giving product away
Evolution Media, the investment arm of the well-known and influential Creative Artists Agency (CAA), has made a $30 million investment in Epic!, a consumer-facing, five-year-old education technology company.
Epic! is a digital reading platform, an online library of sorts, for kids 12 and under. It boasts more than 35,000 books, audio books and videos from 250 publishers including big brand names such as Sesame Street and National Geographic.
And while the company is unquestionably consumer-oriented, selling a subscription service for $7.99 a month for unlimited access, Epic! gives the service away to teachers and schools. Making it free at school, for schools, definitely increased brand awareness, said Suren Markosian, co-founder and CEO. “Early on we knew that teachers using it, adopting it was important. And they are our most active audiences,” he said.
AUA and PicsArt announce the launch of the AI Lab
The American University of Armenia (AUA) and PicsArt have announced the collaborative launch of an Artificial Intelligence (AI) Lab that will employ faculty and students to conduct cutting-edge research in machine learning and computer vision. This offers AUA students the unique opportunity to gain research experience in addition to applied software engineering skills greatly valued by companies in the IT field. AUA and PicsArt have been working together to create a new model that will promote science and research while growing academic and professional capacity in the domain of AI.
“Artificial Intelligence is quickly evolving all over the world and I think it is the right time to set the scene here, in Armenia. We are very happy to launch the AI Lab in collaboration with PicsArt to enhance research in the field of AI. I am anticipating to see how this new initiative will take us a step forward into a center of excellence and surprise other countries,” noted AUA President Dr. Karin Markides.
The AI Lab will employ two members of the AUA faculty, lead researchers, and about 15 undergraduate students from AUA’s Akian College of Science and Engineering (CSE) majoring in computer and data science. The students will be trained to conduct both applied and fundamental research in machine learning and computer vision. AUA professors and machine learning professionals from PicsArt will begin training in January 2020. After a six-week training course, the best performing students will be hired by the AI Lab.
“I am really excited about this project for three main reasons: my background in AI, deep connection to AUA, and prospects for Armenia. PicsArt is all about making awesome and I hope that together with AUA we can make AI awesome in Armenia. I believe this is just the first step of our collaboration and we can do much more together,” noted Hovhannes Avoyan (M PSIA ’95), Founder of PicsArt Inc. and AUA Corporation Trustee.
PicsArt believes in the potential of the AUA students and is excited to provide engineering students with the opportunity for continuous learning in an academic environment, while also solving real-world challenges, based on real data and collaboration with industry experts. As AI is a fast-growing domain, it is extremely important that undergraduates studying in this or other related fields get a high-quality education and gain advanced research skills that will make them competitive in the job market.
The new AI Lab will allow students to explore immense opportunities in research; learn how to experiment with cutting-edge tools and technologies; receive advanced tailored training and mentorship from local and international faculty and industry experts. They will be able to apply their knowledge to real big data sets, and offer solutions for a globally leading application. The students will also get competitive compensation for work that enriches, deepens, and accelerates their learning experience at AUA. Both PicsArt and AUA believe there is immense untapped potential for collaboration between academia and industry. The AI Lab is one example of innovative models and processes that will increase mutual trust and greatly contribute to the value generation and human talent capacity development.
Russia, Japan join hands for lunar robot
Russian and Japanese companies are planning to jointly design a robot to operate on the lunar surface next year.
Russia's Tass News Agency reported that Android Technology Company from Russia and Japan's GITAI reached a tentative agreement when the Japanese company's representatives visited Russia last week.
"Colleagues from Japan are thinking in approximately the same direction as we do, eyeing step-by-step design of robotic systems to explore the near and far space," said Yevgeny Dudorov, executive director of Android Technology.
"We both identify the moon－or, in other words, robotic systems that could function and perform tasks on the moon surface－as our primary target," he said.
According to Dudorov, GITAI specializes in anthropomorphic robots and uses controllers similar to those developed by Android Technology.
The device allows one to operate robots in the "avatar" mode, during which it would mimic the actions of a human controller to perform certain manipulations.
"We will sign a cooperation agreement. Later, we will outline joint plans for 2020, 2021 and later," Dudorov said, adding that the deal with Japan would be signed soon.
Mass balance of the Greenland Ice Sheet from 1992 to 2018
In recent decades, the Greenland Ice Sheet has been a major contributor to global sea-level rise [1,2], and it is expected to be so in the future. Although increases in glacier flow [4-6] and surface melting [7-9] have been driven by oceanic [10-12] and atmospheric [13,14] warming, the degree and trajectory of today’s imbalance remain uncertain. Here we compare and combine 26 individual satellite measurements of changes in the ice sheet’s volume, flow and gravitational potential to produce a reconciled estimate of its mass balance. Although the ice sheet was close to a state of balance in the 1990s, annual losses have risen since then, peaking at 335 ± 62 billion tonnes per year in 2011. In all, Greenland lost 3,800 ± 339 billion tonnes of ice between 1992 and 2018, causing the mean sea level to rise by 10.6 ± 0.9 millimetres. Using three regional climate models, we show that reduced surface mass balance has driven 1,971 ± 555 billion tonnes (52%) of the ice loss owing to increased meltwater runoff. The remaining 1,827 ± 538 billion tonnes (48%) of ice loss was due to increased glacier discharge, which rose from 41 ± 37 billion tonnes per year in the 1990s to 87 ± 25 billion tonnes per year since then. Between 2013 and 2017, the total rate of ice loss slowed to 217 ± 32 billion tonnes per year, on average, as atmospheric circulation favoured cooler conditions [15] and as ocean temperatures fell at the terminus of Jakobshavn Isbræ [16]. Cumulative ice losses from Greenland as a whole have been close to the IPCC’s predicted rates for their high-end climate warming scenario [17], which forecast an additional 50 to 120 millimetres of global sea-level rise by 2100 when compared to their central estimate.
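The headline figures above can be cross-checked with the widely used conversion of roughly 361.8 billion tonnes (Gt) of ice per millimetre of global mean sea-level rise (a value based on the ocean's surface area, not taken from the paper itself); a short calculation reproduces the quoted contribution and the runoff/discharge split.

```python
# Back-of-envelope check of the study's headline numbers.
GT_PER_MM = 361.8            # gigatonnes of ice per mm of global mean sea level

total_loss_gt = 3800         # total 1992-2018 ice loss quoted above
runoff_gt = 1971             # loss attributed to meltwater runoff
discharge_gt = 1827          # loss attributed to glacier discharge

sea_level_mm = total_loss_gt / GT_PER_MM
print(f"sea-level contribution: {sea_level_mm:.1f} mm")        # ~10.5 mm vs 10.6 ± 0.9 quoted
print(f"runoff share: {runoff_gt / total_loss_gt:.0%}")        # 52%
print(f"discharge share: {discharge_gt / total_loss_gt:.0%}")  # 48%
```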
AI genome scanner says Denisovans could live until 38 years old
Artificial intelligence may be able to estimate the maximum lifespans of extinct species and early humans. The technique relies on analysing specific regions of DNA that are linked to ageing.
Benjamin Mayne at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia and his colleagues built an AI to predict the lifespan of different animals. To do this, they first trained an AI on the known genomes of 252 species from five classes of animals, including mammals, reptiles and fish, and their maximum lifespans.
The AI then narrowed down almost 30,000 DNA regions to just 42 that related to lifespan. These were then used to create a formula that can convert them into a prediction of maximum lifespan.
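A minimal sketch of that pipeline (select the regions most correlated with lifespan, then fit a linear formula on just those) might look as follows. The data here is synthetic, the 500-region genome is a stand-in for the ~30,000 regions screened, and this is not CSIRO's actual model.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_species, n_regions = 252, 500          # 500 regions stand in for ~30,000

X = rng.normal(size=(n_species, n_regions))   # e.g. a density score per DNA region
true_w = np.zeros(n_regions)
true_w[:42] = rng.normal(size=42)             # only 42 regions truly carry signal
lifespan = X @ true_w + 30 + rng.normal(0, 0.5, n_species)

model = make_pipeline(
    SelectKBest(f_regression, k=42),  # keep the 42 most lifespan-correlated regions
    LinearRegression(),               # the linear "formula" over those regions
)
model.fit(X, lifespan)

new_genome = rng.normal(size=(1, n_regions))
print(f"predicted maximum lifespan: {model.predict(new_genome)[0]:.0f} years")
```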
The researchers tested the AI on some extinct species. It estimated that the woolly mammoth could live for up to 60 years and Denisovans, a mysterious extinct cousin of modern humans, could live for about 38 years.
The researchers also found that Pinta Island tortoises could live to be 120 years old. Lonesome George, the last known individual of the species, is estimated to have been more than 100 at death. And the oldest bowhead whale is thought to have lived to 211, but the model predicts the species could live to 268.
Ocean temperature reaches record high
The temperature of the oceans hit a record high in 2019, the fifth consecutive year in which a record was set, according to a study published in Advances in Atmospheric Sciences. "The upward trend is relentless, and so we can say with confidence that most of the warming is man-made climate change," says Kevin Trenberth, a scientist at the National Center for Atmospheric Research.