
Women are losing more money due to climate change than men


A UN report has shown that women in rural areas face greater economic losses from the effects of climate change than men. Let’s look at the researchers’ findings.

What’s the trend?

Global warming is an increase in the average temperature on Earth, the main cause of which is human activity. It has been observed since the end of the 19th century, and its speed is constantly increasing. If warming is not stopped, the planet will face negative consequences: some coastal cities will be flooded and disappear, there will be many more hungry people, and wars will break out over vital resources.

The effects of climate change also exacerbate economic inequality between men and women. The fight against global warming is therefore an enduring trend of our century.

Women are losing income

On March 5, 2024, analysts from the UN Food and Agriculture Organization (FAO) published the report “The Unjust Climate.” It says that women in rural areas of the world suffer significantly greater economic losses from the effects of climate change than men: their rural households lose about 8% more income due to heat waves and 3% more due to flooding.

FAO researchers analyzed socioeconomic data from more than 100,000 rural households and more than 950 million people in 24 poor countries. These figures were compared with continuous precipitation and temperature observations over the past 70 years. It turned out that on average, women lose about $37 billion annually from heat waves and about $16 billion from floods.
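
The kind of estimate described above can be sketched as an interaction regression. Everything below is illustrative and synthetic, not FAO’s actual data or methodology:

```python
import numpy as np

# Illustrative sketch: regress log income on heat exposure, a female-head
# indicator, and their interaction. A negative interaction coefficient means
# female-headed households lose relatively more income per unit of heat.
rng = np.random.default_rng(0)
n = 5000
heat_days = rng.poisson(10, n).astype(float)       # days of extreme heat/year
female_head = rng.integers(0, 2, n).astype(float)  # 1 = female-headed household

# Synthetic ground truth: each heat day costs 1% of income, plus an extra
# 0.8% for female-headed households (the gap the regression should recover).
log_income = (10.0 - 0.01 * heat_days
              - 0.2 * female_head
              - 0.008 * heat_days * female_head
              + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), heat_days, female_head,
                     heat_days * female_head])
beta, *_ = np.linalg.lstsq(X, log_income, rcond=None)
print(f"extra loss per heat day for female-headed households: {-beta[3]:.3%}")
```

The same design, scaled to 100,000 real households and 70 years of weather records, is the shape of analysis that yields the 8% and 3% gaps quoted above.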

Reasons for loss of income

The FAO report shows that women take on additional workloads during extreme weather events compared to men. They are more likely to continue working when floods and droughts occur.

The main reasons for women’s greater vulnerability to financial losses from climate change are related to gender inequality and structural factors:

  • women have less access to productive resources, assets, financial services, technology and knowledge, which reduces their ability to adapt to climate risks;
  • women are more likely to work in subsistence agriculture and less likely to diversify their sources of income;
  • women bear the brunt of unpaid domestic work and care for family members, which limits their time and opportunities for adaptation, education and income generation.

Global losses

The FAO estimates that a 1°C increase in the Earth’s average temperature will cause female-headed households to lose about a third more income than male-headed households. In addition, heat is forcing women and children in rural areas to work an average of an hour more each week.

Lauren Phillips, FAO deputy director for gender equality and co-author of the study, said governments were failing to address the factors that disadvantage women and climate aid was not being targeted in a way that would close the gap between the sexes.

She emphasized that the report was the first to quantify this problem: “The gender gap can have a powerful impact on GDP growth. We can increase GDP by 1% globally if we improve food security for 45 million people by focusing on women.”

Women are affected more than men by the climate crisis, in part because its impacts exacerbate existing inequalities—unequal rights to land ownership and lack of economic opportunity.

The report joins a growing body of research showing that women and vulnerable populations are disproportionately affected by the climate crisis. The study also noted that older people face more negative consequences than younger people who have the opportunity to move.

Inflection AI: the company that created the ChatGPT analogue


Pi is not just a chatbot, but an interlocutor: this is how Inflection AI positions its product. The startup, which is only two years old, is already attracting large investments and collaborating with well-known corporations

Who created Inflection AI?

Inflection AI was founded in March 2022. The startup positions itself as “an artificial intelligence studio creating personal AI for everyone” [1].

Inflection AI is led by three entrepreneurs who previously worked in other IT projects.

  • Mustafa Suleyman is a co-founder of the artificial intelligence company DeepMind, which was subsequently acquired by Google. Suleyman left the corporation at the beginning of 2022 and began thinking about creating his own startup to develop algorithms with conversational interfaces. He is the CEO of Inflection AI.
  • Reid Hoffman is the creator of the business social network LinkedIn and one of the first investors in OpenAI. He is a co-founder and board member of Inflection AI.
  • Karen Simonyan is an Oxford graduate who built his own startup, later acquired by DeepMind, where he worked as chief scientist. He holds the same position at Inflection AI.

Introducing Inflection, Reid Hoffman said: “Our mission was to create a tool that is designed to understand the user, not the other way around. We wanted it to not only answer questions, but also be part of a dialogue that helps you on your journey in life, whether it’s communicating with other people, making decisions, or figuring out how to deal with a problem at work.”

Key milestones in the history of Inflection AI

The startup launched in March 2022 with financial support from Microsoft and Nvidia, as well as billionaires Reid Hoffman (co-founder of LinkedIn), Bill Gates (co-founder of Microsoft) and Eric Schmidt (former CEO of Google). It raised $225 million in seed capital [3].

In May 2023, Inflection AI launched a conversational chatbot called Pi, an analogue of ChatGPT [4]. Its main difference from other chatbots is that Pi talks with the user and gradually accumulates knowledge about them from the conversations. Pi itself responds that it is not a chatbot but “a conversational AI, which means it can have more natural and interesting conversations” [5].

The Pi chatbot was launched on the basis of the Inflection language model. In November 2023, the company announced that it had completed training Inflection-2, “the best model in the world for its computing class and the second largest LLM (large language model) in the world today.”

The company noted that the new model is more capable than the previous version: the developers have improved the style of answers and reasoning capabilities [6]. The latest update to date is the Inflection-2.5 model, which was introduced in March 2024 [7]. In terms of performance, it is as close as possible to the results of GPT-4, while requiring 40% less computing resources.

Work also continues on the company’s flagship product, the Pi chatbot: in March 2024, the company announced that users can now message the chatbot via iMessage, Apple’s messaging system. Today the company employs several dozen people, with headquarters in Palo Alto, California.

Future of the company

In June 2023, the startup, which was valued at $4 billion, attracted $1.3 billion in investments from Microsoft, Hoffman, Gates, Schmidt, and Nvidia. This further strengthened the startup’s ties with two key partners in the AI race [8].

Microsoft provides cloud computing infrastructure, and Nvidia (a GPU developer) partners with CoreWeave (cloud services for solving AI problems) to integrate GPUs for LLM training.

Together with partners, Inflection AI is creating the world’s largest supercomputer from 22 thousand Nvidia H100 Tensor Core GPUs. Such computing power will make it possible to create a new generation of artificial intelligence models and train them quickly and efficiently.

So far the supercomputer includes 3,500 GPUs, and it has already proven to be the fastest at training large language models [9]. With its latest major funding round in June 2023, Inflection AI plans to continue expanding the supercomputer’s computing capabilities as well as developing its core product, Pi [10].

Laser communications and humanoids: technologies for deep space exploration

Humanity’s next ambitious mission is to land on the Moon and create a permanent base on the Earth’s satellite. At the same time, technologies are being developed that will make it possible to fly into deep space.

Leading countries are increasing their budgets for space technologies that will one day make it possible to fly to Mars and beyond. According to BIS Research, this market will grow by more than 6% per year to reach $33.9 billion by 2030. Revolution Notes looked into which technologies could allow humanity to reach the boundaries of the solar system.

Super-heavy rockets with refueling

Space agencies are creating advanced rockets to carry crewed missions into deep space. For example, a super-heavy launch vehicle, the Space Launch System (SLS), is being developed as part of NASA’s Artemis program. After a series of Moon missions, it could be used for longer flights. Private players such as SpaceX and Virgin Galactic are also developing their own rockets.

Elon Musk’s Starship super-heavy rocket is still being tested, but it has already been selected to carry astronauts to the Moon in 2026. Before that, SpaceX must not only demonstrate a successful launch but also prove that Starship’s refueling system will operate reliably in space. Tanker versions of Starship are expected to refuel the crewed ship.

This will allow it to fly further than the Moon, including to Mars. “To achieve colonization of Mars within three decades, we need ship production to be 100 Starships per year, but ideally up to 300 per year,” Elon Musk said of the company’s goal.


At the same time, NASA announced that it would use Blue Origin’s New Glenn rocket to launch a new mission to Mars. A two-stage heavy rocket will deliver two Photon probes to Mars at once. This will happen either in August 2024 or at the end of 2026. As part of the Escape and Plasma Acceleration and Dynamics Explorer (ESCAPADE) mission, the probes will study the interaction of the solar wind with the magnetosphere of Mars.

Blue Origin itself is working on the Blue Ring space tug. It will be able to provide “space logistics and delivery” services from low Earth orbit to cislunar space and beyond. In addition, Blue Ring is itself refuelable and will be able to deliver fuel to other spacecraft.


Nuclear and detonation engines

A few years ago, NASA revived its nuclear program to develop a system that could reach Mars in 100 days. Together with DARPA, the US Defense Advanced Research Projects Agency, it is developing nuclear thermal propulsion (NTP) for fast transit missions to Mars.

In an NTP system, a nuclear reactor generates heat that expands hydrogen (or deuterium) propellant and directs it through nozzles to create thrust.
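
A back-of-the-envelope sketch of why NTP shortens trips: heating light hydrogen propellant roughly doubles specific impulse versus the best chemical engines, and the Tsiolkovsky rocket equation makes achievable delta-v scale linearly with it. The numbers below are generic textbook values, not mission-specific NASA/DARPA figures:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    return isp_s * G0 * math.log(mass_ratio)

mass_ratio = 4.0                    # wet mass / dry mass, same for both cases
dv_chem = delta_v(450, mass_ratio)  # typical hydrogen/oxygen chemical engine
dv_ntp = delta_v(900, mass_ratio)   # representative NTP specific impulse

print(f"chemical: {dv_chem/1000:.1f} km/s, NTP: {dv_ntp/1000:.1f} km/s")
```

Doubling Isp doubles delta-v for the same propellant fraction, which is what buys the faster transit.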

At the same time, in 2023, as part of the NASA Innovative Advanced Concepts (NIAC) program, the agency selected a concept for the development of such an engine. Developed by University of Florida hypersonics program manager Ryan Gosse, the design uses a “wave rotor topping cycle” that could cut the flight time to Mars to as little as 45 days.

Nuclear electric propulsion (NEP) pairs a nuclear reactor with an ion engine: the reactor powers an electromagnetic field that ionizes and accelerates an inert gas (such as xenon) to create thrust.


NASA also funded the startup Positron Dynamics, which developed a nuclear fission engine (NFE). It uses hot nuclear fission products to create thrust. Now the authors of the project are working to redirect fragments of nuclear decay in one direction, providing the necessary thrust for rockets.

In the meantime, NASA is planning the Dragonfly mission, which will test a nuclear-powered rotorcraft on Saturn’s moon Titan. For two years it will study the composition of the moon’s sands to find out whether they contain organic compounds.

Unable to harness solar energy in Titan’s hazy atmosphere, Dragonfly will be powered by a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). The mission is planned to launch in 2028.

In addition, development of rotating detonation rocket engines (RDREs) is underway. These are propulsion systems that use one or more detonation waves continuously propagating around an annular channel. During detonation, combustion products expand at supersonic speed, which saves fuel and increases the power of the system.

In December 2023, engineers at NASA’s Marshall Space Flight Center successfully tested a 3D-printed rotating detonation engine. According to the testers, its characteristics meet the requirements for deep-space operation, such as a spacecraft setting course from the Moon to Mars. They say this will allow the creation of lightweight propulsion systems that can send more payload mass into deep space.

Light sails and lasers to speed up ships

In order to send missions into deep space, we also need installations that can provide high thrust and constant acceleration of ships. Researchers are currently exploring the potential of focused arrays of lasers and light sails. Similar developments are being carried out by Breakthrough Starshot and Swarming Proxima Centauri.

In addition, a group from McGill University has, at NASA’s request, developed a concept for a 10-meter-wide laser array that would beam power from Earth to heat hydrogen plasma in a chamber at the back of a spacecraft, creating thrust. It could potentially allow missions to reach Mars in as little as 45 days.

The 100 MW laser-thermal propulsion (LTP) installation could deliver energy to a spacecraft out to almost the distance of the Moon, heating the hydrogen fuel to a temperature of 10,000 K.

The lasers are focused into a hydrogen heating chamber, which is then discharged through a nozzle. Now the team is testing a more powerful setup. If the experiments are successful, deep space missions will take only a few weeks.

To support deep space missions, NASA is also working on a solar electric propulsion (SEP) project. The project is testing advanced technologies, including solar panels, high voltage power control and distribution systems, energy processing units and power system diagnostic systems. One of the latest developments being tested is a prototype of the Advanced Electric Propulsion System (AEPS).

This is a 12-kilowatt engine that uses a continuous stream of ionized xenon to create thrust. Such a system could potentially accelerate a spacecraft to extremely high speeds using relatively little fuel, allowing it to launch missions into deep space. The first three AEPS engines will be installed on the Gateway lunar station.


Communications and navigation in deep space

Before we start sending missions into deep space, we need to solve the problem of communication with them. In October 2023, NASA conducted the Deep Space Optical Communications (DSOC) experiment, which could change the way spacecraft communicate.

In it, data was sent by laser over a distance far greater than that from the Earth to the Moon. A near-infrared laser beam carrying encoded test data was directed at the Hale Telescope at Caltech’s Palomar Observatory in San Diego County from a distance of almost 16 million km.

The DSOC system was placed on board the Psyche spacecraft and transmitted the signal during the vehicle’s flight toward the main asteroid belt between Mars and Jupiter. NASA notes that the laser can transmit data 10 to 100 times faster than the radio systems traditionally used on other missions.

In a demonstration of the system’s capabilities, the agency transmitted to Earth a photo of the constellation Pisces, as well as a 15-second high-resolution video. The maximum data rate over the laser link was 267 Mbit/s, and the average was 62.5 Mbit/s. Broadcasting the video took only 1.5 minutes.
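
The quoted figures can be sanity-checked with simple arithmetic (a rough sketch that ignores protocol overhead and assumes a constant link rate):

```python
# Implied size of the transmitted video, and the time a conventional radio
# link 10-100x slower (the article's own comparison) would have needed.
avg_rate_mbit_s = 62.5   # average laser data rate from the article
broadcast_s = 90.0       # "only 1.5 minutes"

video_mbit = avg_rate_mbit_s * broadcast_s
video_mbyte = video_mbit / 8
print(f"implied video size: ~{video_mbyte:.0f} MB")

for slowdown in (10, 100):
    print(f"radio link {slowdown}x slower: ~{broadcast_s * slowdown / 60:.0f} min")
```

So the same ~700 MB clip would have taken between 15 minutes and 2.5 hours over a traditional radio link.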

In February 2024, NASA also invited the private space industry to submit plans for missions to Mars, including deep space communications services. They must consider solutions for transmitting data to both ground-based and orbital stations.

In addition, the agency is trying to solve the problem of deep space navigation, which cannot be organized with traditional satellite-based methods. In 2018, NASA developed Station Explorer for X-ray Timing and Navigation Technology (SEXTANT), an autonomous navigation system that uses pulsars (rotating neutron stars) as beacons.

Finally, NASA developed and tested the Deep Space Atomic Clock. It has already demonstrated remarkable accuracy, drifting by only one second over 10 thousand years.
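
For context, the drift figure quoted in the article translates into the fractional frequency error that clock engineers usually quote:

```python
# Convert "one second of drift per 10,000 years" into a fractional error.
SECONDS_PER_YEAR = 365.25 * 86400  # Julian year in seconds

drift_s = 1.0
period_years = 10_000
fractional_error = drift_s / (period_years * SECONDS_PER_YEAR)
print(f"fractional frequency error: {fractional_error:.2e}")
```

That is a few parts in a trillion, the kind of stability that lets a spacecraft compute its own position from one-way radio signals instead of waiting for round-trip timing from Earth.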

In the future, this will make it possible to launch crewed missions to the far corners of the solar system and beyond. An improved model (Deep Space Atomic Clock 2) is currently in development; it is planned to fly on the VERITAS spacecraft, which will go to Venus and create a complete topographic map of the planet.

Space robotics

Space agencies are developing robotic landers that can explore other planets instead of humans or that will build habitable spaces for astronauts. NASA is working in partnership with US firms such as Astrobotic Technology, Masten Space Systems and Moon Express to develop lunar robotic modules.

In addition, in January 2024 the US space agency tested the capabilities of the ARMADAS robotic system, whose robots can autonomously assemble various structures. Potentially, this will allow the creation of infrastructure elements such as solar power plants, communication towers and temporary crew buildings on other planets.

The system uses various types of robots, including ones that resemble small worms. They can collect, repair and redistribute materials for various structures. ARMADAS uses a small set of 3D building blocks, or voxels, made from composite materials to form a structure. Robots can work in orbit, on the surface of the moon, or on other planets even before humans arrive.

The system can be remotely programmed to create different types of objects. In the experiments, the robots built a shelter of hundreds of voxels in just over four days of work. NASA noted that if the system were sent to the Moon a year before people landed there, it would have time to build 12 similar shelters. Such mini-robots will be able to charge autonomously at stations or even receive energy wirelessly.

Finally, NASA is developing a humanoid robot, Valkyrie, for future missions to Mars. Testing began in Australia in 2023 to demonstrate the robot’s autonomous capabilities and its ability to perform tasks in challenging environments. Perth-based Woodside Energy will use Valkyrie to develop remote caretaking technologies for uncrewed and offshore energy facilities.

Valkyrie’s capabilities are expected to be used in Artemis and other missions: the robot will be able to check that equipment is working and carry out its maintenance. In the future, it could be used in space industries that would support astronauts’ long-term stays on other planets.

How neural networks help in the fight against climate change


Artificial intelligence systems have been criticized for their high energy consumption. However, this technology also has the potential to provide solutions to combat the effects of climate change

What’s the trend?

Under the influence of anthropogenic factors, the concentration of greenhouse gases in the atmosphere is increasing. As a result, we are experiencing global warming – an increase in the average temperature on Earth, the main cause of which is human activity. It has been observed since the end of the 19th century, and its speed is constantly increasing.

If warming is not stopped, the planet will face negative consequences: some coastal cities will disappear, there will be many more starving people, and wars will break out over vital resources. The fight against it is therefore an enduring trend of our century, and AI can help humanity cope with global warming.

AI to combat global warming

To limit the effects of climate change, a number of measures are needed: identifying sources of harmful emissions, using renewable energy sources, and forecasting floods, forest fires and other disasters.

Researcher Lakshmi Babu Saheer from Anglia Ruskin University studies exactly how AI can be used to combat global warming. She identifies four main areas: energy, transport, agriculture, and disaster forecasting.

Energy

AI can reduce the sector’s negative impact by forecasting supply and demand more accurately. Neural networks can identify patterns in how and when people use electricity, and can also predict the amount of energy generated from clean sources such as solar panels and wind turbines. This data helps use electricity more efficiently.

By estimating the amount of energy obtained from green sources, AI can plan the most “profitable” time for washing clothes or charging an electric car. On an industrial scale, AI will help power grid operators prevent and mitigate power system failures.
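
The pattern-finding idea can be sketched with a toy least-squares forecast (synthetic data, not any grid operator’s real model):

```python
import numpy as np

# Fit a daily load curve with one sinusoidal harmonic via least squares,
# then forecast the next day's hourly demand and its peak hour.
rng = np.random.default_rng(1)
hours = np.arange(24 * 14, dtype=float)                      # two weeks, hourly
true_load = 50 + 20 * np.sin(2 * np.pi * (hours - 8) / 24)   # daily cycle, MW
observed = true_load + rng.normal(0, 2, hours.size)          # metering noise

# Design matrix: intercept + daily sine/cosine features
X = np.column_stack([np.ones_like(hours),
                     np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])
beta, *_ = np.linalg.lstsq(X, observed, rcond=None)

next_day = np.arange(24 * 14, 24 * 15, dtype=float)
Xn = np.column_stack([np.ones_like(next_day),
                      np.sin(2 * np.pi * next_day / 24),
                      np.cos(2 * np.pi * next_day / 24)])
forecast = Xn @ beta
print(f"forecast peak hour: {int(next_day[forecast.argmax()]) % 24}:00")
```

Real systems add weather, calendar and occupancy features and use far richer models, but the core idea of learning the demand cycle from history is the same.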

Iranian scientists used AI to predict a research center’s energy consumption. The occupancy of the building, its structure and local weather conditions were taken into account. Based on the data obtained, specialists developed algorithms that reduced energy consumption by 35%.

Transport

Transport accounts for about 20% of global CO2 emissions. AI models can create eco-friendly travel options by advising drivers on routes with less congestion and more free-flowing traffic, which will reduce exhaust emissions.

In addition, AI can be useful for traveling in electric vehicles. In 2021, Swedish researchers proposed an AI-based system that plots routes for electric vehicles with minimal energy consumption, taking into account characteristics such as vehicle speed and the location of charging points.
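
A toy version of such energy-aware routing is ordinary shortest-path search with energy as the edge weight. The road segments and per-segment energy costs below are made up for illustration, not taken from the Swedish system:

```python
import heapq

def min_energy_route(graph, start, goal):
    """Dijkstra over {node: [(neighbor, energy_kwh), ...]}: cheapest-energy path."""
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:                       # reconstruct the path
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return cost, path[::-1]
        if cost > best.get(node, float("inf")):
            continue                           # stale heap entry
        for nbr, e in graph.get(node, []):
            nc = cost + e
            if nc < best.get(nbr, float("inf")):
                best[nbr], prev[nbr] = nc, node
                heapq.heappush(heap, (nc, nbr))
    return float("inf"), []

roads = {  # hypothetical edges: a flat detour beats a short, steep climb
    "A": [("B", 2.0), ("C", 1.0)],
    "B": [("D", 1.5)],
    "C": [("D", 4.0)],  # steep segment: short on distance, costly on energy
}
cost, path = min_energy_route(roads, "A", "D")
print(cost, path)  # 3.5 ['A', 'B', 'D']
```

With energy rather than distance as the weight, the planner picks the flat detour A→B→D (3.5 kWh) over the shorter but hillier A→C→D (5 kWh), which is the essence of the approach.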

Agriculture

Several scientific papers have demonstrated that more efficient agricultural practices can reduce negative environmental impacts. AI can develop plans for the economical use of land and fertilizers.

A 2017 study by Stanford University scientists showed that advanced AI models can predict soybean yields at the county level, a result made possible by using satellite imagery to analyze and track crop growth. Predicting crop failures helps plan alternative ways of procuring food.

Disaster Management

AI is contributing to disaster forecasting and mitigation. For example, an AI model proposed by an international group of scientists in 2021 analyzed drone imagery to predict flood damage in the Indus River basin in Pakistan.

Forest fires release smoke particles into the atmosphere, and this air pollution contributes to the greenhouse effect. Fires can be prevented with artificial intelligence: the technology company Overstory AI processes satellite imagery and climate data, helping to detect fires early so they can be put out quickly.

 

Why is potassium now in short supply and what is the danger for agriculture?


The world’s soils are low on potassium, a key nutrient plants need to grow. This problem could lead to a global food crisis

What’s the trend?

The growth of plants requires not only nitrogen and phosphorus but also potassium.

This nutrient raises yields and improves product quality. At the same time, potassium deficiency is observed throughout the world, and in the long term this could undermine the global food system.

Why did potassium deficiency occur?

About 20% of agricultural land worldwide suffers from potassium deficiency. This problem is especially acute in Southeast Asia, Latin America and sub-Saharan Africa. For example, about 75% of soils in the rice fields of China and 66% of soils in the wheat belt of South Australia do not contain adequate amounts of potassium.

In India, its deficiency is already leading to lower yields. It may seem that the problem can be solved by simply adding more potassium to the soil, but it is much more complicated.

Potassium is usually mined from potash (potassium carbonate), a crystalline mineral found in layers of underground rock. World reserves are concentrated in a few countries, so most other countries depend on imports. This leaves their food systems vulnerable to supply disruptions and rising prices.

Canada, Russia, and Belarus together hold about 70% of the world’s potash fertilizer reserves. Together with China, these four countries produce 80% of global volume and dominate the $15 billion international potash market.

Price jumps

Prices for potash fertilizers are subject to fluctuations: there have been two large jumps since 2000. The first came in 2009 as a consequence of the financial crisis, when prices more than tripled. Despite the danger to food systems, governments took no action to protect against future shocks.

The second surge occurred in the early 2020s due to the pandemic, geopolitical conflicts and sanctions. By April 2022, potash fertilizers were six times more expensive than in January 2021. By 2024, prices had dropped somewhat.

Harm to the environment

Potassium is extremely important for agriculture, but its extraction has a significant impact on the environment. For every ton of potassium extracted, about 3 tons of waste accumulate in “salt mountains.”

Without proper management, rain washes this waste into rivers and groundwater, where it damages ecosystems. Researchers do not yet know the potential consequences of rising potassium fertilizer concentrations in the soil; hypothetically, they could be toxic to many animal species.

Six measures of protection

In a study, a team of British and Spanish scientists proposed six specific measures to protect against fluctuating potash prices and reduce the substance’s environmental impact. In their view, humanity should do the following:

  • analyze potassium reserves: carry out a global assessment of the amount of the substance to identify countries and regions at risk;
  • better predict price fluctuations: improve monitoring systems and develop an international reporting system for potash fertilizer production;
  • help locally: identify sufficient potassium levels for different countries and develop targeted recommendations for local farmers;
  • assess environmental consequences: summarize all data on environmental harm from potassium mining, especially to rivers and lakes;
  • develop a circular economy for potash: learn to recycle and reuse potash resources to reduce dependence on mining;
  • establish international cooperation on potassium: develop an intergovernmental mechanism to pool knowledge about the substance and agree on prices for potash fertilizers.

It turns out that everything is different: what scientific stereotypes has AI dispelled?


Artificial intelligence has penetrated deep into many scientific fields: proteins for drug development are designed in a matter of seconds, thousands of new galaxies are found in one sweep, droughts and floods are predicted long in advance. Although AI cannot replace a scientist, the technology can definitely change our understanding of phenomena that seem obvious at first glance.

Not only living organisms are capable of recognizing images

Vision is one of the main channels of perception for people and animals. The human eye can distinguish thousands of shades and textures, members of the cat family can see in the dark, and even deep-sea squid navigate the ocean depths in darkness with the help of their eyes.

In addition to the ability to “see” this or that object, the ability to recognize it is very important. Dogs, for example, can distinguish people and other animals from members of their own species, even on television screens. Until 2012, no one in the scientific community believed that a computer could be taught to do the same thing.

That year, at NeurIPS, one of the key AI conferences, a group of scientists presented a neural network that had learned to recognize images remarkably well. It was named after its first author, Alex Krizhevsky: “Alex’s network,” that is, AlexNet.

The researchers trained a large deep convolutional neural network to classify 1.3 million high-resolution images, taken from the giant ImageNet database of annotated images. Largely thanks to this work, computer vision began to develop so actively that by 2024 we can unlock phones with our faces.
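
In miniature, a single convolutional filter of the kind AlexNet stacks by the hundreds works like this. Here the kernel is hand-written to detect vertical edges on a toy image; AlexNet instead learns its kernels from data:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small kernel over an image, recording local responses."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # dark left half, bright right half
vertical_edge = np.array([[-1.0, 1.0],  # fires where brightness jumps
                          [-1.0, 1.0]])

response = conv2d(image, vertical_edge)
print(response.argmax(axis=1))  # the edge column lights up in every row
```

A network like AlexNet chains millions of such learned filters, with nonlinearities and pooling between layers, which is what turns raw pixels into recognizable categories.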

By the way, ten years later the same thing happened with the understanding of texts. ChatGPT was born, bringing complex models closer to end users, and the world became interested in the nature of AI technologies.

Logic is no longer a human monopoly

For a long time, computers solved mathematical problems only by following human-specified rules, like a calculator. Nobody believed that complex logical inference, such as solving geometry problems or finding proofs of theorems, was within their reach. The hardest, “starred” problems have long been central to the search for young mathematical talent: at the International Mathematical Olympiad, schoolchildren compete to crack the most intricate of them. No computer system could boast comparable results.

In January 2024, a team of Google researchers introduced AlphaGeometry, a real breakthrough: an artificial intelligence system that solves complex geometry problems at a level approaching that of a human Olympiad gold medalist. A paper about the program was published in the journal Nature.

In a comparison test on 30 geometry Olympiad problems, AlphaGeometry solved 25 within the standard Olympiad time limit. For comparison, the previous state-of-the-art system solved only ten such problems, while the average gold medalist solves 25.9.

Climate does not always affect the size of plant leaves

There is a rule in botany: in a humid environment like a jungle, a plant’s leaves will be much larger than those of the same plant in a dry climate, like a desert. In other words, temperature and the amount of precipitation necessarily affect the size of the leaves.

It turned out that this rule only works within genera, but not within species. Maples as a genus really do have larger leaves at the equator and smaller ones toward the poles. But in Norway maples specifically, leaf size depends more on gene exchange with other populations. Australian researchers established this a year ago: using computer vision, they selected and analyzed several thousand specimens from the National Herbarium of New South Wales.

This discovery is useful not only for agriculture: it can offer a new perspective on the evolution and adaptation of plants and help preserve rare species even in the face of climate change.

Extinctions are not followed by booms of new species

We know about global warming, and cataclysms such as meteorite falls and volcanic eruptions periodically occur on Earth – as a result of all this, over tens of thousands of years, biodiversity can decrease by 95%. At least five mass extinctions are known – and a sixth, man-made, is happening right now due to hunting, deforestation and environmental pollution.

Biologists believed that after each such extinction a boom occurs – just as after dinosaurs the planet was populated by mammals, so in other cases empty ecological niches are quickly filled by new species.

In 2020, this important evolutionary theory had to be revised: using machine learning, British and American scientists showed that there is no link between extinctions and the subsequent emergence of new species. To do this, their program analyzed more than a million fossil descriptions covering 170 thousand species. The article was not heavily cited in its field, but it did find an audience among fans of popular science news.

Fingerprints aren’t that unique

Using fingerprints, people log into banking applications, open doors, and identify criminals. Everyone is convinced that no two prints can coincide, even, say, the prints of the index fingers of both hands of one person, and that it is therefore impossible to say for sure whether a set of prints from different fingers belongs to the same person.

Most likely, this assumption will have to be reconsidered: there is no evidence that all prints are truly unique. On the contrary, in January the opposite finding appeared: a person’s own prints are extremely similar to each other, especially on paired fingers.

Forensic scientists did not notice this because they compared the length of the ridge lines and the places where they branch, while the AI found a new marker: the curvature and orientation of the ridge swirls. In other words, two prints may look completely different yet in fact have the same “owner”.

Scientists from Columbia University established this. They trained an AI model on half a million artificial and 60 thousand real fingerprints, several per person. In most cases, the system correctly matched different fingerprints of the same people.

From a technology standpoint, the AI model used is quite simple and unremarkable. But the results obtained with it could increase the efficiency of forensic examination by almost two orders of magnitude.
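At its core, the matching step of such a system reduces to comparing learned feature vectors. The sketch below is a simplification: the actual Columbia system is a trained deep network, and the embedding values and the 0.9 threshold here are invented for illustration. It shows the idea of deciding whether two prints share an owner by thresholding cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_person(emb1, emb2, threshold=0.9):
    """Decide whether two print embeddings belong to the same person."""
    return cosine_similarity(emb1, emb2) >= threshold

# Hypothetical embeddings of a thumb and an index print (values invented).
thumb = [0.80, 0.10, 0.55, 0.20]
index = [0.78, 0.14, 0.50, 0.22]
print(same_person(thumb, index))
```

Raising the threshold trades false matches for missed matches, which is exactly the balance a forensic deployment would have to tune.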

And this, in turn, will make it easier to find connections between crimes and faster to rule innocent people out of legal proceedings. Gadget owners will only need to enroll one finger and will then be able to unlock the device with any of them.

What’s next?

These are just a few examples of how AI helps scientists refute axioms – and ultimately improve life and move towards new scientific discoveries.

How else can researchers put the technology to work? First, to find patterns in big data: several thousand analytical articles are published every day, far more than any person can absorb. Second, to formulate specific hypotheses: “this vaccine will act on a person in this way”, “this cell will react to this stimulus in that way”.

AI models are not yet ready for autonomous operation; the systems are improving, but they still carry many biases inherited from people. The situation will probably change in the coming years, however, and sometime in the 2050s scientists may receive a Nobel Prize for an answer to the question of the origin of the universe obtained together with AI.

AI Trends: Should Data Be Considered a Product?


Historically, businesses have had two approaches to leveraging data.  

Some companies took a local approach, in which individual people and teams leveraged their own databases to obtain information based on their needs. Other companies, usually larger ones, followed a more “big-bang” method, forming a specialized team to aggregate, prepare and share data.

What problems do these approaches pose? Neither is sustainable. Neither can handle the volume of data coming in from customers and internal operations. And neither allows customers to profit from their own data.

But there is now a third option, according to McKinsey: making data an internal product.

Using data as an internal product provides a competitive advantage to companies like Netflix, Intuit, PayPal and many others.  

Should you follow suit? A diverse group of industry experts tackled this question during a recent roundtable discussion.

Panelists Sejal Amin, Chief Technology Officer at Shutterstock, Jana Eggers, CEO at Nara Logics, Razat Gaurav, CEO at Planview, and Dr. Rich Sonnenblick, Chief Data Scientist at Planview, were joined by moderator Ray Wang, Principal Analyst and founder of Constellation Research, Inc.

This blog post presents their views on using data as a product, including: 

  • The shift in how businesses view customer data
  • How data productization benefits businesses and customers
  • The importance of ethical data sourcing, and gray areas to watch out for

Listen to the full discussion: AI in the Enterprise: Opportunities, Challenges and the Future 

A change in culture – and mindset – regarding customer data 

Moving company data to the cloud had many benefits for customers: easier access to data, better security and greater organization. It also massively changed how businesses perceive, use and manage data.

At the start of the cloud computing craze, companies like Planview began storing their customers’ data. Many cloud providers approached customer data like a storage unit: you keep it, but you don’t touch it.

Because companies kept their hands off the data, developers, product teams, and customer support teams got used to never interacting with it without permission: all data access required explicit authorization for specific actions.

But the culture has changed. The relationship between businesses and customers, as well as the relationship between teams within a business, has changed, primarily because there is more data available in real time.  

Rather than being a storage unit service, cloud computing companies today operate more like banks – constantly using data to make better decisions for the business and its customers. 

Yes, businesses store data, but they also constantly use it, learn from it, and put it to work. Treating data as a product means that the data has its own processes, is customized for different uses, has support (including documentation and interfaces) for different downstream needs, and is governed by clear rules – again, like in a bank.
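The product metaphor can even be made concrete in code: each dataset gets an accountable owner, documented interfaces, and explicit governance rules. The descriptor below is a hypothetical Python sketch, not a structure from McKinsey or any of the panelists; every field name is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A hypothetical descriptor for one internal data product."""
    name: str
    owner_team: str                     # every product has an accountable owner
    documentation_url: str              # support for downstream consumers
    interfaces: list = field(default_factory=list)    # e.g. SQL view, REST API
    allowed_uses: list = field(default_factory=list)  # governance rules

    def grants(self, use: str) -> bool:
        """Check whether a downstream use is covered by the governance rules."""
        return use in self.allowed_uses

churn = DataProduct(
    name="customer-churn-features",
    owner_team="data-platform",
    documentation_url="https://wiki.example.com/churn-features",
    interfaces=["sql_view", "rest_api"],
    allowed_uses=["ml_training", "bi_reporting"],
)
print(churn.grants("ml_training"))   # True
print(churn.grants("ad_targeting"))  # False
```

The `grants` check mirrors the “clear rules, like in a bank” point: consumption is explicit and auditable rather than ad hoc.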

Using data as a product benefits both businesses and customers 

There is value in leveraging data to improve products, design features, provide AI/ML capabilities to train models in real-time, and make changes for downstream customer use. Data is essential for analysis and optimization. 

Despite these many benefits, using data as a product and viewing internal teams as customers are new mindsets for many companies. The larger the company, the greater the effort required to create these data products in a systematic, repeatable, and efficient manner.  

By looking at data more broadly, businesses can see opportunities to use it beyond internal purposes.

If a company has a goldmine of customer data, this may be just the opportunity to increase the value of its existing assets.  

Customers often do their own data science and want to access their own data efficiently. Data as a product is another way for businesses to serve their customers, allowing them to access their own data more efficiently and improve their own processes. 

This is a win-win situation based on information that already exists and is just waiting to be used to its full potential. 

To achieve additional gains on the data front, companies are considering how to scale their offerings in a more sustainable way, requiring less human intervention and more automation and visibility for customers looking for different types of content. 

It’s clear that defining data as a product has internal benefits for a business. The same interpretation, however, can also unlock a wide range of value for customers.

Ethical data sourcing could not be more important 

Shutterstock is a great example of using data as a product. Sejal shared Shutterstock’s experience with ethical data, as well as industry concerns around treating data as a commodity.  

Shutterstock has made several major deals with Amazon, Google, Meta and other large companies, selling them its image library or subsets of it. In some cases, companies came back and requested a different type of asset, or more of the same type.

As businesses consumed more and more data, Shutterstock was uniquely prepared with its ethically sourced assets and corresponding metadata, of which it already had a massive amount, collected both automatically and manually through its contributors.

This is exactly what the hyperscalers need to develop their offerings. And all this at a time when the conversation around legal and ethical data was intensifying, making Shutterstock’s ethically sourced data a competitive advantage.

However, there are gray areas. When selling data as a product, Sejal encourages companies to ask tough questions, such as:  

  • What do you give?
  • How much do you give?
  • Is this the right thing to do right now?
  • What is the ultimate goal of what you are trying to do, and how are you trying to do it?
  • What type of content are you looking for? What type of metadata?

Finding these answers will help businesses make ethical data sourcing their North Star. 

Conclusion 

Should we, then, treat data as a product? The question is timely, and the rise of AI capabilities is accelerating the need to answer it.

This approach offers significant opportunities not only for your business but also for your customers. Ethical data sourcing fosters a relationship with data that customers and stakeholders can trust and value. 

Learn about other implications of AI in today’s enterprise – including the impact on knowledge workers, responsible AI, and what organizations can do today – in AI in the Enterprise: Opportunities, Challenges and the Future.

Cryptoboom: who and how makes money from computer money


On March 5, 2024, Bitcoin set a new all-time high of $68,700. In an episode of the podcast “What Has Changed?”, we discussed with experts why blockchain is secure and how to look for work in the field of digital money.

Guests of the episode:

  • Sergey Romantsev, podcast host and tech blogger;
  • Sergey Khitrov, founder of the world’s largest listing agency Listing.Help, the blockchain forum Blockchain Life, and the investment fund Jets Capital;
  • Sergey Romanov, creator of Red Company, part of the TON blockchain ecosystem.

Topics of conversation:

1:25 – in simple words: how money gets into the computer

4:39 – blockchain: a security standard?

8:21 – cryptocurrency as a national currency: why many are afraid of it

11:59 – how crypto transactions are carried out

19:24 – not only mining: how else to earn cryptocurrency

29:00 – is it worth buying cryptocurrency now?

Cryptocurrency and blockchain: what is it?

Sergey Khitrov is sure that cryptocurrency is an evolution of ordinary money and should be treated as another way to pay for purchases or to earn. Sergey Romanov added that the distinctive feature of cryptocurrency is that it is stored on a blockchain, a decentralized system whose copies are distributed across nodes all over the world. To transact with such a currency, you just need to connect to your wallet from any device with Internet access.
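The “chain” part of blockchain can be sketched in a few lines: each block stores the hash of the previous one, so tampering with any historical record breaks every later link. This is a toy illustration only, with no networking, consensus, or mining:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that references the hash of the current last block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice -> Bob: 1 coin")
add_block(chain, "Bob -> Carol: 0.5 coin")
print(is_valid(chain))   # True
chain[0]["data"] = "Alice -> Mallory: 100 coins"
print(is_valid(chain))   # False: rewriting history breaks the hash links
```

Real blockchains add proof-of-work and network-wide replication on top of this linking scheme, which is what makes the recorded history so hard to rewrite in practice.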

Blockchain is a fairly secure system, says Romanov. It becomes vulnerable only if the code is poorly written, which gives hackers more opportunities to break in. The safest way to store digital money, the expert explains, is a wallet protected by a seed phrase of 12 or 24 random words.
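The 12- or 24-word scheme Romanov mentions can be illustrated in a few lines of Python. Real wallets draw words from the standardized 2048-word BIP-39 list and add a checksum; the tiny wordlist below is a stand-in for illustration only:

```python
import secrets

# A real wallet uses the 2048-word BIP-39 list; this tiny sample
# wordlist is a placeholder for illustration.
WORDLIST = ["apple", "brave", "cloud", "dance", "eagle", "flame",
            "grape", "house", "ivory", "jolly", "knife", "lemon",
            "mango", "noble", "ocean", "piano"]

def make_seed_phrase(n_words: int = 12) -> str:
    """Pick n cryptographically random words to form a recovery phrase."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(n_words))

print(make_seed_phrase(12))
```

Because the words are drawn with a cryptographically secure generator, the phrase itself encodes the key material, which is also why anyone who learns your seed phrase controls the wallet.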

How to mine cryptocurrency and how to make money on it

Khitrov said that mining accounts for only 3% of cryptocurrency earnings. In general, there are two main ways to obtain digital money.

  1. Proof-of-Work – mining, which uses the computing power of a computer or other equipment.
  2. Proof-of-Stake – earning a percentage return on the funds held (staked) in an account.
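The Proof-of-Work idea from point 1 can be sketched in a few lines: miners search for a nonce that makes the block’s hash meet a difficulty target, here simplified to “starts with N zero hex digits”. This is a toy illustration, not real mining code:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Hypothetical block contents; finding the nonce is the "work".
nonce, digest = mine("block #1: Alice pays Bob 0.5 BTC", difficulty=4)
print(nonce, digest)
```

Each extra zero of difficulty multiplies the expected search time by 16, which is how real networks keep block production slow and expensive no matter how fast the hardware gets.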

Khitrov added that the cryptocurrency sector is worth considering as a potential place of work. According to experts, there are already more than a hundred thousand cryptocurrencies in the world, each a separate project that needs both technical and non-technical specialists.

Why cryptocurrency is viewed with distrust

Experts explained that states are wary of cryptocurrency because they need to control cash flows in order to calculate taxes and replenish the treasury, and the decentralized organization of the blockchain does not allow this. Nevertheless, Khitrov notes, El Salvador, which relatively recently made cryptocurrency legal tender, has managed to increase tourism and earn money from investments in this area.

ITIL Certification: Where to Start?


With over 2 million certified professionals in 2024, ITIL is the world’s most popular IT service management methodology.

It is a set of best practices for ITSM (IT Service Management) that serves as a guide for companies undergoing digital transformation.

It allows them to manage IT services efficiently, in line with the intrinsic needs of the business, so that it can achieve its objectives.

In this article, you will learn what ITIL is, its importance for managing an IT service and the different levels of ITIL certification. You will also discover tips on the best way to prepare for the certification exam and, above all, how to pass it successfully.

Understanding ITIL and its Importance

To fully understand what ITIL is, it is necessary to know the history of this framework and the changes it has undergone over time.

History and evolution of ITIL

ITIL was born at the end of the 1980s out of the work of the CCTA (Central Computer and Telecommunications Agency).

This British government agency was tasked with improving the quality of public IT services, about which users were complaining heavily.

To this end, the CCTA produced a collection of recommendations for resolving the problem. This set of some 30 documents was so appreciated for its relevance that professionals in the field adopted it as a standard of good IT service management practice. ITIL has since gone through several revisions: V2 between 1999 and 2004, V3 in 2007, and V4 in 2019.

Main concepts and principles of ITIL

ITIL V4 certification is oriented around the concept of the “Service Value System”, which contains 5 major elements:

  • the service value chain,
  • the 34 ITIL practices (which evolved from the V3 processes),
  • the 7 guiding principles,
  • ITIL governance,
  • continual improvement.

As for the service value chain, it is based on 6 activities that can be combined in different ways to form different value streams: plan, improve, engage, design and transition, obtain/build, and deliver and support.

Among the 7 guiding principles of ITIL V4, the first is to focus on value. The next are to start where you are, progress iteratively with feedback, and collaborate and promote visibility.

The last 3 principles state that you must think and work holistically, keep it simple and practical, and optimize and automate.

ITIL V4 also defines 4 dimensions to consider when managing IT services: organizations and people, information and technology, partners and suppliers, and value streams and processes.

Benefits of ITIL Certification for Professionals and Organizations

Holding an ITIL certification allows IT professionals to acquire highly sought-after IT service management skills. It can raise their value on the job market and open access to positions of responsibility, which translates into better remuneration.

For businesses, ITIL certification helps standardize the internal processes for providing IT services while remaining very flexible. With this training, they can better meet the needs of customers on a global scale and become more competitive.

The different levels of ITIL certification

There are 5 levels of ITIL certification: Foundation, Practitioner, Intermediate, Expert, Master.

Description of certification levels

The Foundation level is the entry level of ITIL certification. Training at this level is ideal for IT professionals who want to learn the fundamentals of ITIL and how the framework can be used to significantly improve IT service management.

Practitioner is the second level of ITIL certification. Its objective is to allow people certified at Foundation level to apply the framework in their daily work environment. The Practitioner training covers a range of elements supporting continuous improvement of business performance, such as the DevOps, Agile and Lean methodologies.

The third level of certification, “Intermediate”, comes in two streams: Lifecycle and Capability. The first covers the life cycle of IT services, while the second concerns the operational side of management, including incident response. The two streams contain 5 and 4 training modules respectively; you can choose between Lifecycle and Capability or follow both.

For people with in-depth knowledge of ITIL and its best practices, the Expert level is recommended; this certification proves that its holder can apply the ITIL approach in its entirety within a company. The Master level is reserved for professionals holding the Expert certification who also have at least 5 years of experience in a role aligned with the standard.

Tips for Choosing the Right Level

To choose the appropriate certification level, first consider your knowledge of ITIL practices. Take into account the ITIL certifications you already hold and the prerequisites for the ones that interest you. Finally, make sure the targeted certification level is consistent with your professional objectives and with the needs of the organization where you work.

Preparing for ITIL Certification and taking the Exam

To obtain ITIL certification, good preparation is essential. It starts with the right study materials: acquiring the official ITIL V4 documentation is highly recommended, since reading it prepares you to assimilate the material covered in the training.

The next step is to follow official training provided by organizations accredited by AXELOS, which awards ITIL certification. You benefit from high-quality teaching delivered by experts in the field, giving you the IT service management skills you are looking for. With some of these organizations, the training even includes the cost of the certification exam. You should budget between  on average.

You can also strengthen your skills by attending ITIL certification workshops for better preparation. Applying the practices learned during ITIL training in your daily professional life is also strongly recommended, along with taking practice exams. You can even use Lemon Learning’s learning solutions during your preparation to review certain concepts more easily and better understand ITIL.

When you are ready, register with an approved center to take the certification exam by paying the associated fees worth between .

Then keep reviewing the various practices until exam day, when you will need to draw on all your knowledge. Approach the certification exam calmly and in a relaxed manner, so that you can answer the questions to the best of your ability.

After Certification — Opportunities and Advancement

When you obtain ITIL certification, new horizons open up in your professional career. You can access positions such as CIO (Chief Information Officer), project manager, technical service manager, and more. You can even consider setting up your own business as an ITIL consultant.

ITIL certification also opens the job market to you anywhere in the world, since it is internationally recognized. After obtaining it, however, you should keep learning in order to pass the higher certification levels and stay informed of the latest trends in IT service management. You can also obtain complementary certifications such as ISO 20000 or Lean IT, which broaden your skill set.

Conclusion

ITIL is a very useful approach for managing an organization’s IT services more effectively. Obtaining an ITIL certification makes a person more competent in managing these services and offers excellent career prospects.

Companies adopting this framework benefit from improvements in the quality of service offered, in customer satisfaction, and in competitiveness. ITIL certification is available at several levels to suit your needs, and thorough preparation is essential to pass the exam.

What will the Google Gemini artificial intelligence tool change?

Artificial intelligence systems that can access multiple data types or information sources simultaneously and interact between these different data types are defined as multimodal generative artificial intelligence.

 While traditional AI models generally focus on a single type of data, multimodal generative AI can integrate text, images, audio, and other types of data.

 Multi-modal generative AI can thus provide richer and more comprehensive solutions in real-world applications.

Google parent company Alphabet announced Gemini 1.0, a multimodal large language model (LLM) with language, voice, code and video understanding capabilities, on December 6, 2023. Surpassing GPT-4 on most benchmarks, Gemini was presented as the most advanced large language model to date.

Gemini comes in three versions


Introduced in three versions, Ultra, Pro and Nano, each Gemini model is designed for different usage scenarios. The top-of-the-line model, Ultra, is being developed for extremely complex tasks and is targeted for release in early 2024.

The Gemini Pro version is designed for performance and deployment at scale. Google has enabled access to Gemini Pro on Google Cloud Vertex AI and Google AI Studio as of December 13, 2023.

For coding, a specialized version of Gemini Pro powers Google’s AlphaCode 2 generative AI coding technology.

The Gemini Nano version targets on-device use cases. It comes in two variants: Nano-1 with 1.8 billion parameters and Nano-2 with 3.25 billion parameters. Among the devices using Nano is the Google Pixel 8 Pro smartphone.

What abilities does Gemini have?


Google’s new artificial intelligence solution Gemini offers the capacity to perform tasks across multiple modalities, including text, images, audio and video.

Gemini’s multimodal nature also enables different modalities to be combined to understand input and produce an output. This lets it apply its capabilities far more comprehensively, even though they are similar to those of platforms such as GPT.

  • Text summarization

Gemini offers the opportunity to summarize content by bringing together content from different data types.

  • Text generation

Gemini can generate text based on user prompts. The text generation process is driven by a question-and-answer type chatbot interface.

  • Text translation

Gemini comes with extensive multi-language capabilities that enable understanding and translation of more than 100 languages.

  • Code analysis and generation

Gemini can understand, explain and generate code in popular programming languages, including Python, Java, C++ and Go.

  • Image understanding

Google Gemini can understand image-based content. Gemini, which can parse complex visuals such as graphs, shapes and diagrams, can perform tasks such as creating captions for the image.

  • Audio processing

Gemini offers recognition and voice translation support in more than 100 languages, just like text content.

  • Video understanding

Gemini can process and understand video clip content to answer questions and create explanations.

  • Multimodal reasoning

Gemini can perform multimodal reasoning by combining different types of data to create an output. This is Gemini’s most important capability.

Gemini’s present and near future


Developed by Google as a foundation model and widely integrated into various Google services, Gemini also supports developers’ applications. Currently, Gemini’s capabilities are used in Google Bard, Google AlphaCode 2, Google Pixel, Android 14, Vertex AI and Google AI Studio. Google is also testing Gemini in generative AI-powered search to reduce latency and improve quality.

Although Pro and Nano versions of Gemini are currently available, the real big step of this multi-modal artificial intelligence will be taken with the Ultra model. Google says this model will be rolled out to select customers, developers, partners, and experts for early trials and feedback before being fully rolled out to developers and businesses in early 2024. 

Gemini Ultra is also expected to form the basis for Bard Advanced, an updated, more powerful and capable version of the Google Bard chatbot. If the process goes well for Gemini, this multimodal generative AI is planned to be integrated into the Google Chrome browser in the not-too-distant future.