Hubble helps uncover origin of Neptune’s smallest moon Hippocamp


Astronomers using the NASA/ESA Hubble Space Telescope, along with older data from the Voyager 2 probe, have revealed more about the origin of Neptune’s smallest moon. The moon, which was discovered in 2013 and has now received the official name Hippocamp, is believed to be a fragment of its larger neighbour Proteus.

A team of astronomers, led by Mark Showalter of the SETI Institute, has used the NASA/ESA Hubble Space Telescope to study the origin of the smallest known moon orbiting Neptune, discovered in 2013.

“The first thing we realised was that you wouldn’t expect to find such a tiny moon right next to Neptune’s biggest inner moon,” said Mark Showalter. The tiny moon, with an estimated diameter of only about 34 km, was named Hippocamp and is likely to be a fragment from Proteus, Neptune’s second-largest moon and the outermost of the inner moons. Hippocamp, formerly known as S/2004 N 1, is named after the sea creatures of the same name from Greek and Roman mythology [1].

A moon stolen from the Kuiper Belt

The orbits of Proteus and its tiny neighbour are incredibly close, at only 12 000 km apart. Ordinarily, if two satellites of such different sizes coexisted in such close proximity, either the larger would have kicked the smaller out of orbit or the smaller would crash into the larger one.

Instead, it appears that billions of years ago a comet collision chipped off a chunk of Proteus. Images from the Voyager 2 probe from 1989 show a large impact crater on Proteus, almost large enough to have shattered the moon. “In 1989, we thought the crater was the end of the story,” said Showalter. “With Hubble, now we know that a little piece of Proteus got left behind and we see it today as Hippocamp.”

Hippocamp is only the most recent result of the turbulent and violent history of Neptune’s satellite system. Proteus itself formed billions of years ago after a cataclysmic event involving Neptune’s satellites. The planet captured an enormous body from the Kuiper belt, now known to be Neptune’s largest moon, Triton. The sudden presence of such a massive object in orbit tore apart all the other satellites in orbit at that time. The debris from shattered moons re-coalesced into the second generation of natural satellites that we see today.

Later bombardment by comets led to the birth of Hippocamp, which can therefore be considered a third-generation satellite. “Based on estimates of comet populations, we know that other moons in the outer Solar System have been hit by comets, smashed apart, and re-accreted multiple times,” noted Jack Lissauer of NASA’s Ames Research Center, California, USA, a coauthor of the new research. “This pair of satellites provides a dramatic illustration that moons are sometimes broken apart by comets.”

Notes

[1] The mythological hippocampus has the upper body of a horse and the lower body of a fish. The Roman god Neptune would drive a sea-chariot pulled by hippocampi. The name Hippocamp was approved by the International Astronomical Union (IAU), whose rules require that the moons of Neptune be named after figures from Greek and Roman mythology of the undersea world.

Audio, Image and Video Compression for Non-Informaticians

Choosing the right codec for every project takes a lot of time, and not everybody is an informatician. My goal here is to reduce the time you need to spend to understand the simple, relevant compression methods.

Sometimes the compression process causes some loss of information, and sometimes it does not. To save space, we trade away some data by throwing it out; lowering the resolution is one way of doing that. But first of all, how does data compression work?

Representation of an Image

Representing a digital image can take a lot of space. For example, storing a 4-minute song digitally takes over 40 megabytes; a 2-hour HD video would take around 1000 gigabytes. To take up less space, in the real world the information is digitally compressed: the 40-megabyte song can be compressed to about 4 megabytes, the 2-hour video reduced to just 2 gigabytes, and so on.
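Where does that "over 40 megabytes" come from? Here is a minimal back-of-the-envelope sketch, assuming CD quality (44.1 kHz sample rate, 16-bit samples, stereo; figures the article does not state):

```python
# Back-of-the-envelope size of an uncompressed 4-minute song,
# assuming CD quality: 44 100 samples/s, 16 bits (2 bytes), 2 channels.
sample_rate = 44_100      # samples per second
bytes_per_sample = 2      # 16-bit audio
channels = 2              # stereo
seconds = 4 * 60          # 4-minute song

size_bytes = sample_rate * bytes_per_sample * channels * seconds
size_mb = size_bytes / 1_000_000
print(f"{size_mb:.1f} MB")  # about 42.3 MB, the "over 40 megabytes" above
```

The same multiplication (pixels × bytes per pixel × frames) is what pushes uncompressed HD video into the hundreds of gigabytes.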

But are we aware enough of our options when it comes to compression?

Here are some tips to lose less data, or none at all.

Lossy Data Compression

  1. Spatial compression: identical pixels are not each represented separately; instead of coding the same pixels one by one, a single selected pixel stands for the whole run. JPEG, MJPEG, AVC-Intra, All-Intra and animated GIF are some examples of compression that discards information.
  2. In audio: MP3 is the classic lossy format. (FLAC, often mentioned alongside it, is in fact lossless; see the next section.)
  3. Temporal compression: to reduce storage size, keyframes are either left uncompressed or only the differences between successive frames are stored. Examples: MPEG-1, MPEG-2, MPEG-4.
  4. Colour sampling (in French, échantillonnage): only part of the colour information is registered (chroma subsampling).
[Figure: colour-sampling schemes for VHS and other cameras. Source: samsungdisplay]
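The spatial and temporal ideas above can be sketched as toy code. This is only an illustration of the principle (run-length coding and frame differencing), not what JPEG or MPEG actually do internally, and the function names are mine:

```python
# Toy spatial compression: run-length encode runs of identical pixels,
# so a run is stored once as [value, count] instead of pixel by pixel.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

# Toy temporal compression: keep the first frame as a keyframe and
# store only per-pixel differences for each following frame.
def frame_diffs(frames):
    keyframe = frames[0]
    diffs = [[cur - prev for cur, prev in zip(f, frames[i])]
             for i, f in enumerate(frames[1:])]
    return keyframe, diffs

row = [7, 7, 7, 7, 2, 2, 9]
print(rle_encode(row))            # [[7, 4], [2, 2], [9, 1]]

frames = [[5, 5, 5], [5, 6, 5], [5, 6, 7]]
key, diffs = frame_diffs(frames)
print(key, diffs)                 # [5, 5, 5] [[0, 1, 0], [0, 0, 2]]
```

Notice how static areas (the runs of 7s, the unchanged pixels between frames) cost almost nothing once encoded; that is the intuition behind both techniques.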

Lossless Data Compression

It means that the compressed data can be decompressed back into the exact original: every single part of the data that was originally in the file remains the way it was after it is uncompressed.

PNG, GIF, ZIP, Motion JPEG 2000, VC-2 HQ, Apple Animation, FFV1 and CorePNG are some examples of this type of compression. For more detail: https://en.wikipedia.org/wiki/Lossless_compression
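You can check the "exact original" guarantee directly with a general-purpose lossless compressor, for example Python's standard zlib module:

```python
import zlib

# Lossless means bit-for-bit reconstruction: compress, decompress,
# and the result is identical to the original data.
original = b"AAAAABBBBBCCCCC" * 1000
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original   # the exact original comes back
print(len(original), "->", len(compressed), "bytes")
```

Repetitive data like this shrinks dramatically, yet nothing is thrown away; that is the whole point of lossless compression.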

My Suggestions

For your photographs you can also use PSD, BMP or TIFF. With TIFF you can be sure that no data is lost in compression.

For audio: WAV is entirely uncompressed, but it takes up a lot of space.

Video: if you want no image-quality loss for video, then HEVC (H.265) and MRP are the ones I can suggest.

Don’t Be Afraid to Lose Data

It looks frightening at first, thinking about how many compromises you need to make. Using lossy compression gives you more space in your storage, but lowering the quality forces you to accept imperfection. However, lossy compression for images can sometimes be a smart move: if the image looks just as good to the human eye, in other words if the visual quality is about the same, why should we care about the technical loss?

On the other hand, an image that is highly compressed from the beginning, especially in video, sometimes does not allow for the necessary touches afterwards: low-quality footage makes colour correction or contrast adjustments difficult later. That is why, for video, it is a good idea to shoot your rushes (footage) in high quality, which lets you keep as much detail as possible from the start. But once you finish editing and post-production, you can compress the result with a lossy codec. It is perfectly fine, and you will see it does not make a very big difference.

For audio storage, however, I suggest you do the opposite: the higher the quality, the greater the benefit. Making changes to a sound wave afterwards will not give the same calibre of result; sound does not tolerate compression as well as image files do, and once the quality is lost, it is completely gone. That is why, in my view, you should keep the audio files in your archive as lossless as possible.

Analog vs. Digital recording? Which one is superior?

Both digital and analog recordings transform sound into an electrical signal. While doing so, analog uses a different method to replicate the original sound waves than digital does. Both try to stay as close to the original sound as possible. But which recording is superior to the other, analog or digital? That is a difficult question to answer.

In AUDIO

Which one way is a better way to represent the true essence of a sound? Digital or Analog?

Vinyl and cassette tapes (magnetic tape) are examples of analog recording media. Analog media record the sound on an imprinted surface with ups and downs. That is why, when we listen to vinyl, there is that tiny imperfection in the sound that causes crackling and popping noise.

Digital systems, on the other hand, deliver cleaner audio by representing the sound wave in a discrete way: the information arrives as blocks coded in 1s and 0s. The needle-vibration noise is no longer there. However, because the sound is captured as a composite of many layers (snapshots of the sound per second), the information is translated in bits and pieces.

In the binary 1-and-0 transformation of digitisation, we lose some of the original sound wave. To put it another way, a digital system reproduces the original audio more cleanly, but fails when it comes to capturing the closest essence of the sound that analog produces.
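That loss can be made concrete with a tiny sketch: sample a sine wave and round each sample to one of 256 levels (8 bits), a crude stand-in for digitisation. The rounding (quantisation) error is the part of the original wave the binary representation gives up:

```python
import math

# Sample one cycle of a sine wave, then quantise each sample to 8 bits
# (256 levels). The rounding error is the information digitisation loses.
samples = [math.sin(2 * math.pi * n / 64) for n in range(64)]
levels = 256
quantised = [round((s + 1) / 2 * (levels - 1)) for s in samples]
restored = [q / (levels - 1) * 2 - 1 for q in quantised]

max_error = max(abs(s - r) for s, r in zip(samples, restored))
print(f"worst quantisation error: {max_error:.5f}")  # small, but never zero
```

Real digital audio uses 16 or 24 bits, so the error is far below audibility; the point is simply that a discrete representation is always an approximation.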

According to audiophiles, the purest sound comes from a vinyl recording played on an analog sound system; for them, that is the true essence of the sound. With today's technology, though, if you really want a high-quality analog sound system, you need speakers, amplifiers, headsets, mixers and so on, and that will definitely cost you a fortune.

In VIDEO

Which image quality is better in cameras? Digital or Analog?

When it comes to visual image transmission, Analog vs. Digital competition comes back into play.

The most serious disadvantage of analog signals in imagery, compared to sound, is that analog image transmissions always contain noise; when the image is copied or processed, that noise is reproduced in the next generations. In a sensor, the charges from the pixels must first be converted to a voltage, which is done with a capacitor circuit. Then the voltage levels must be measured and converted to a number, which is done with an analog-to-digital converter (A/D).

A key advantage of a digital image, versus an analog image is the ability to make unlimited copies without any loss of image quality.

Both CCD (charge-coupled device) and CMOS (complementary metal-oxide semiconductor) camera sensors convert light into electrons.

Since the 1960s, CCD has been a major technology for digital imaging. CCD sensors are analog components that require more electronic circuitry outside the sensor; they are more expensive to produce and can consume up to 100 times more power than CMOS sensors. CCDs tend to be used in cameras that focus on high-quality images with lots of pixels and excellent light sensitivity.

Analog CCD vs. Digital CMOS?

CMOS sensors, on the other hand, are produced in Silicon Valley. They are usually less expensive and give great battery life, and the CMOS imager has a completely digital output. However, CMOS sensors traditionally had lower quality, lower resolution and lower sensitivity.

In addition, each CCD output amplifier must run at a higher bandwidth, which results in higher noise. Consequently, high-speed CMOS imagers can be designed to have much lower noise than high-speed CCDs.

What about Tri-CCD vs. Tri-CMOS?

Today the technology has advanced: cameras with a single colour charge have been replaced by cameras with three. But what does that mean?

Cameras generally deliver image quality according to how many optical colour separations they use. Today's cameras use three colour charges, one for each of the main colours: the red, green and blue bands.

Conclusion

As a result, using digital cameras is a necessity more than a choice in terms of what they can provide: more advanced features (test image, stamp, frame counter, I/O port status, error checking, partial scan, image flip, etc.) and higher resolutions are not possible with analog cameras. Using analog sound-recording technology, however, is still a personal choice rather than a necessity.

Hubble sees the brightest quasar in the early Universe

Quasars are the extremely bright nuclei of active galaxies. The powerful glow of a quasar is created by a supermassive black hole which is surrounded by an accretion disc. Gas falling toward the black hole releases incredible amounts of energy, which can be observed over all wavelengths.

The newly discovered quasar, catalogued as J043947.08+163415.7, is no exception to this; its brightness is equivalent to about 600 trillion Suns and the supermassive black hole powering it is several hundred million times as massive as our Sun. “That’s something we have been looking for for a long time,” said lead author Xiaohui Fan (University of Arizona, USA). “We don’t expect to find many quasars brighter than that in the whole observable Universe!”

Despite its brightness, Hubble was able to spot the quasar only because its appearance was strongly affected by gravitational lensing. A dim galaxy located right between the quasar and Earth bends the light from the quasar, making it appear three times as large and 50 times as bright as it would without the lensing effect. Even so, the lens and the lensed quasar are extremely compact and unresolved in images from optical ground-based telescopes; only Hubble's sharp vision allowed it to resolve the system.

The data show not only that the supermassive black hole is accreting matter at an extremely high rate but also that the quasar may be producing up to 10 000 stars per year. “Its properties and its distance make it a prime candidate to investigate the evolution of distant quasars and the role supermassive black holes in their centres had on star formation,” explains co-author Fabian Walter (Max Planck Institute for Astronomy, Germany), illustrating why this discovery is so important.

Quasars similar to J043947.08+163415.7 existed during the period of reionisation of the young Universe, when radiation from young galaxies and quasars reheated the obscuring hydrogen that had cooled off just 400 000 years after the Big Bang; the Universe reverted from being neutral to once again being an ionised plasma. However, it is still not known for certain which objects provided the reionising photons. Energetic objects such as this newly discovered quasar could help to solve this mystery.

For that reason the team is gathering as much data on J043947.08+163415.7 as possible. Currently they are analysing a detailed 20-hour spectrum from the European Southern Observatory’s Very Large Telescope, which will allow them to identify the chemical composition and temperatures of intergalactic gas in the early Universe. The team is also using the Atacama Large Millimeter/submillimeter Array, and hopes to also observe the quasar with the upcoming NASA/ESA/CSA James Webb Space Telescope. With these telescopes they will be able to look in the vicinity of the supermassive black hole and directly measure the influence of its gravity on the surrounding gas and star formation.

Apollo 8 Astronaut: Sending People to Mars Would Be Stupid

Bill Anders, famous for the “Earthrise” photo, the first picture of Earth taken by humans from lunar orbit, has criticised Elon Musk and Jeff Bezos over their highly ambitious space-exploration projects. Anders also believes NASA is not ready to go back to the Moon and that there is not enough public support for it.

Bill Anders, who reached lunar orbit 50 years ago and took the photo of Earth rising over the Moon, has levelled serious criticism at the space-exploration projects of private aerospace firms as well as NASA.

Talking to BBC Radio 5 Live for a new documentary about the Apollo 8 mission, Anders said that “it is not actually fun to make space travel and not so exciting to go to the Moon.” Discussing how hard space travel is, Anders argued that NASA should be spending its budget on other things instead of planning a manned mission to Mars.

“What’s the imperative? What’s pushing us to go to Mars? I don’t think the public is that interested,” said Anders, adding that the public isn’t even interested in going to the Moon, which is true. According to a 2018 Pew poll, only 18 per cent of US citizens think going to Mars is important, while 45 per cent say it is a lower priority. The other 37 per cent of Americans think it is unnecessary to go to the Red Planet, even as Elon Musk is committed to making it happen.

You might think that, with Mars bombarded by the Sun's radiation and so far away, the public's opinion is not surprising. But Americans' general view of going back to the Moon is almost the same as for Mars: only 13 per cent say it is imperative, 42 per cent think it is a lower priority, and for 44 per cent it is not important.

“Mars colony is nonsense”

What Anders underlined about space exploration is very important, because many people support the idea of “extending the lifespan of Earth” instead of colonising the Moon or Mars. Some people argue that space agencies should spend their budgets on helping the environment rather than building spacecraft to explore deep space. Some scientists, such as NASA’s Umut Yıldız, think instead that the weapons industry's budget should be downsized and allocated to environmental projects.

Anders supports unmanned Mars missions in this context, but thinks sending people there is just a fantasy, and the plans of Elon Musk’s SpaceX and Jeff Bezos’ Blue Origin are hard for him to believe. “There’s a lot of hype about Mars that is nonsense,” said Anders. “Musk and Bezos, they’re talking about putting colonies on Mars, that’s nonsense.”

Illustration of SpaceX’s Starship at a Mars colony. [SpaceX]

Frank Borman, commander of the Apollo 8 mission, backed Anders in his view of deep-space missions. According to him, you would just puke all the way getting there, only to see “devastation”, “meteor craters” and “no color at all”, just different shades of grey.

Reminding us that the Apollo missions took place in an era of extreme competition against the Soviets, Anders believes NASA couldn’t go to the Moon today. “They’re so ossified… many of the centres are mainly interested in keeping busy, and you don’t see the public support.”

Despite the criticisms, Borman remained supportive of NASA, saying that “we need robust exploration of the Solar System and man is a part of that.”

It is hard to know what Elon Musk thinks, or whether NASA will re-evaluate its robotic Moon missions, after Anders’ comments. But space exploration is already a huge industry, and it cannot be held back while there are so many ambitious entrepreneurs like Musk, Bezos and Virgin Galactic CEO Richard Branson.

Sources

IFLS, Sciencealert, DijitalX

A Little Step Toward Tesla’s Dream of Free Energy: Plancx

In today's technological world, it is becoming evident day by day that we need to get electricity quicker, cheaper and more easily; electricity is the lifeblood of modern society. With that in mind, at the beginning of this year we took action to produce the Plancx solar charger, designed to power the devices we use to improve our business and social relationships, or simply to have some fun.

During the research and development phase, we tried to find the best solar panel on the market and the lamination that makes a panel more durable and efficient. These two main requirements also had to be met in production at the lowest cost and the highest quality.

As a result of that research, we decided to move to production with the most productive and serviceable panel available, built around Maxeon™ Gen III solar cells produced by SunPower, a United States corporation, with an Ethylene Tetrafluoroethylene (ETFE) lamination that helps increase the panel's efficiency and durability.

In the meanwhile, using these two best-in-class components, we worked toward a remarkable design that makes the charger more portable and easy to take with you at any moment of your life.

Finally, three Plancx solar chargers have been designed for different uses. These are:

Plancx City can generate 7 watts / 1.4 amperes and is designed for easy carrying in city life thanks to its tiny size, especially in women's bags.

Plancx Walk can generate 7 watts / 1.4 amperes and is designed as a monoblock for outdoor activities such as camping and the beach, with a bag on its back side to hold any device.

Plancx Road can generate 12 watts / 2.4 amperes and is designed for quicker charging. The usage tests we performed can be seen in the attached video.

These power figures from the producer are the result of technical tests. Actual amperes and watts may vary according to the amount of sunlight received.
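As a quick sanity check, the producer's watt and ampere pairs are internally consistent with the 5-volt USB standard the chargers use, since power = voltage × current:

```python
# Power (watts) = voltage (volts) x current (amperes).
# All three chargers output over USB at 5 V, so the quoted
# watt/ampere pairs should agree with P = V * I.
usb_voltage = 5.0
chargers = {"City": 1.4, "Walk": 1.4, "Road": 2.4}  # amperes

for name, amps in chargers.items():
    watts = usb_voltage * amps
    print(f"{name}: {usb_voltage} V x {amps} A = {watts} W")
# City and Walk come out at 7 W, Road at 12 W, matching the specs above.
```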

The world’s handiest solar panels launch on Indiegogo on February 1st, 2019.

FACTS

City
Weight: 0.150 kg (0.33 lbs)
Folded: 16.5 cm x 15.5 cm x 0.1 cm (6.5 in x 6.1 in x 0.04 in)
Unfolded: 16.5 cm x 31.5 cm x 0.3 cm (6.5 in x 12.4 in x 0.12 in)
Power capacity: 7 watts / 5 volts / 1.4 amperes

Walk
Weight: 0.217 kg (0.48 lbs)
Dimensions: 17 cm x 31 cm x 0.5 cm (6.7 in x 12.2 in x 0.2 in)
Power capacity: 7 watts / 5 volts / 1.4 amperes

Road
Weight: 0.218 kg (0.48 lbs)
Folded: 25 cm x 16 cm x 0.5 cm (9.8 in x 6.3 in x 0.2 in)
Unfolded: 25 cm x 33 cm x 0.1 cm (9.8 in x 13 in x 0.04 in)
Power capacity: 12 watts / 5 volts / 2.4 amperes

The product has a lifespan of more than 5 years, even with full-time use, unless it is broken or exposed to intense heat. Thanks to its break resistance it can even continue to generate electricity when cracked, as long as it is not split completely. It is also the most flexible, waterproof and dustproof panel on the market. These capabilities definitely make it a sustainable, eco-friendly energy source. It has an auto-restart capability, too: if charging stops because sunlight is lost behind a cloud or the like, charging resumes automatically as soon as any sunlight returns.

There is only one USB Type-A port, a common standard compatible with almost all devices, such as smartphones, tablets, Kindles, speakers, smartwatches and power banks. There are several ways to use it: adhering it to non-porous surfaces with suction cups, hanging it with carabiners, or standing it upright. It is really easy to use; just plug the cable into the charger and turn it toward the sun.

Besides, it has no battery, so you don't have to worry while you use it under the sun. Power banks provide a portable solution for recharging devices, but once they run out of power they become useless until recharged. Our panel likewise depends on its source, sunlight, but you do not have to wait for it to recharge before using it again: any sun rays get it going. Under full sun it charges like a wall charger; under a cloudy sky, like a USB port on a computer.

The price will be $24 for City, $22 for Walk and $29 for Road with the 25% Indiegogo early-bird discount.

Communicate Better with your Audio Expert in 5 Minutes!

Welcome to the Audio-Visual geek's corner.

Today we will learn how to communicate better with the sound experts of the audio-visual sector. Are you a videographer, podcaster or film director? Then you need to express your basic needs to your audio professionals more clearly. Learning the basics does not take long.

Audio Vocabulary 

Technical professions such as acoustic engineering, sound engineering, audio engineering, mixing, sound design, audio-equipment testing and music each have a unique way of describing the forms of a sound to one another. Communication in today's world is not limited to person-to-person exchanges; it also matters within your own production sector.

Let’s learn how some basic vocabulary works in sound expert’s world:

What is a Sound?

Sound is a vibration in the air; another way to say it, an oscillating wave. To describe this wave, French uses the term onde, and the ondes provoke a sensation in our ears.

  1. The denser and stiffer the medium, the faster sound travels: about 340 metres per second in air, 1400 m/s in water and 5000 m/s inside metal.
  2. Sound has 4 main elements. Sound experts generally talk about a distinct sound, characterised by its pitch (in French one speaks rather of frequency), intensity (loudness), timbre, and its development over time. Pitch is a percept of a sound and can only be judged subjectively, but frequency has a scientific definition.
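The speeds in point 1 (understood as metres per second) translate directly into travel times. A small sketch, using one kilometre as an example distance:

```python
# Approximate speed of sound in different media (metres per second),
# using the figures quoted above.
speeds = {"air": 340, "water": 1400, "metal": 5000}

distance = 1000  # how long does sound take to cross 1 km?
for medium, speed in speeds.items():
    print(f"{medium}: {distance / speed:.2f} s")
# Sound crosses a kilometre of air in about 2.94 s, of water in
# about 0.71 s, and of metal in 0.20 s.
```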

What is a Frequency?

Regardless of the object, vibration creates a sound wave. The back-and-forth vibrational motion of the particles creates two main kinds of sound: a narrow band of frequencies or a wide band of frequencies. The number of cycles that a vibrating object completes in one second is called its frequency; another word for a cycle is a vibration.

  1. The most commonly used unit for frequency is the hertz (abbreviated Hz); 1 vibration per second is 1 Hz. People can hear sounds at frequencies from about 20 Hz to 20,000 Hz, the maximum hearing range for humans. Dogs, for example, can hear from approximately 40 Hz up to 60,000 Hz.

[Figure: sinusoid of a sound wave. Source: physicsclassroom]

2. Sounds above the upper audible limit of human hearing, 20,000 Hz, are called ultrasound; sounds below 20 Hz are called infrasound. The higher the frequency, the more directionally the trebles travel; among basses, mediums and trebles (in French grave, médium, aigu), the basses propagate in every direction.

That is why, when neighbours complain about noise, they complain about the bass: we can block the propagation of trebles, but not of basses.
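The idea of frequency as cycles per second can be sketched in a few lines: generate one second of a 440 Hz tone (the orchestral tuning A, used here purely as an example) and count how many times the wave starts a new cycle:

```python
import math

# Generate one second of a 440 Hz sine wave at an 8000 Hz sample rate,
# then count upward zero crossings: each marks the start of a cycle.
freq, sample_rate = 440, 8000
wave = [math.sin(2 * math.pi * freq * n / sample_rate)
        for n in range(sample_rate)]

cycles = sum(1 for a, b in zip(wave, wave[1:]) if a <= 0 < b)
print(cycles)  # 440 cycles in one second, i.e. 440 Hz
```

Doubling `freq` would double the count: twice the cycles per second, twice the frequency, and a pitch one octave higher.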

What is a timbre?

Every instrument has its own voice (tone). The timbre of an instrument is made up of its unique vibrations, and this uniqueness gives each instrument a personality. Thanks to timbre, we can differentiate between tone qualities.

What is Intensity?

Intensity is the force with which we play the instrument. The unit of measure for volume is the decibel: the loudness of a sound is measured in decibels (dB).

There is no such place as total silence on Earth.

Just to give you an idea: a desert sits at 0-10 dB, a sound studio at 10-20 dB, and a missile launch at 180 dB. If the sound power doubles, that corresponds to an increase of 3 dB.
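The "+3 dB per doubling" rule comes from the logarithmic definition of the decibel: the level change is ten times the base-10 logarithm of the power ratio. A minimal sketch:

```python
import math

# Level change in dB = 10 * log10(power ratio).
def db_change(power_ratio):
    return 10 * math.log10(power_ratio)

print(f"{db_change(2):.2f} dB")    # doubling the power: about +3.01 dB
print(f"{db_change(10):.2f} dB")   # 10x the power: +10.00 dB
```

This is also why the dB scale can span everything from a desert to a rocket launch: each step of 10 dB means ten times the power.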

  1. Sound intensity level, also known as acoustic intensity, is defined as the power carried by sound waves per unit area in a direction perpendicular to that area. Sound intensity is thus power per square metre, and the common unit of power is the watt.
  2. A sound-level meter is the device for measuring the intensity of noise, music and other sounds.
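To turn an intensity in watts per square metre into a level in dB, one compares it with the conventional threshold of hearing, 1e-12 W/m² (a standard reference value the article does not state explicitly):

```python
import math

I0 = 1e-12  # reference intensity (threshold of hearing), in W/m^2

def intensity_level_db(intensity_w_per_m2):
    """Sound intensity level in dB relative to the hearing threshold."""
    return 10 * math.log10(intensity_w_per_m2 / I0)

print(f"{intensity_level_db(1e-12):.1f} dB")  # 0.0 dB, threshold of hearing
print(f"{intensity_level_db(1e-6):.1f} dB")   # 60.0 dB, conversational level
print(f"{intensity_level_db(1.0):.1f} dB")    # 120.0 dB, near pain threshold
```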

Why time matters?

The length or duration of a concert depends on time. Especially if you need a specific piece of music or harmony in your orchestra, it is vital to know when to end a sound. The unit of time in the audio-visual world is based on the second.

  1. In audio terms, the beginning of a sound is called the attack: the time between the onset of a sound and the moment it reaches its maximum level.

To become a great director (in French, réalisateur), these basic concepts are your time-saving helpers. Better communication comes from learning how to use this vocabulary: the more you use it, the more of a pro you become.

Main image: Pixabay

Dancing with the Enemy: Stellar duo in R Aquarii

This spectacular image — the second instalment in ESO’s R Aquarii Week — shows intimate details of the dramatic stellar duo making up the binary star R Aquarii. Though most binary stars are bound in a graceful waltz by gravity, the relationship between the stars of R Aquarii is far less serene. Despite its diminutive size, the smaller of the two stars in this pair is steadily stripping material from its dying companion — a red giant.

Years of observation have uncovered the peculiar story behind the binary star R Aquarii, visible at the heart of this image. The larger of the two stars, the red giant, is a type of star known as a Mira variable. At the end of their life, these stars start to pulsate, becoming 1000 times as bright as the Sun as their outer envelopes expand and are cast into the interstellar void.

The death throes of this vast star are already dramatic, but the influence of the companion white dwarf star transforms this intriguing astronomical situation into a sinister cosmic spectacle. The white dwarf — which is smaller, denser and much hotter than the red giant — is flaying material from the outer layers of its larger companion. The jets of stellar material cast off by this dying giant and white dwarf pair can be seen here spewing outwards from R Aquarii.

Occasionally, enough material collects on the surface of the white dwarf to trigger a thermonuclear nova explosion, a titanic event which throws a vast amount of material into space. The remnants of past nova events can be seen in the tenuous nebula of gas radiating from R Aquarii in this image.

R Aquarii lies only 650 light-years from Earth — a near neighbour in astronomical terms — and is one of the closest symbiotic binary stars to Earth. As such, this intriguing binary has received particular attention from astronomers for decades. Capturing an image of the myriad features of R Aquarii was a perfect way for astronomers to test the capabilities of the Zurich IMaging POLarimeter (ZIMPOL), a component on board the planet-hunting instrument SPHERE. The results exceeded observations from space — the image shown here is even sharper than observations from the famous NASA/ESA Hubble Space Telescope.

SPHERE was developed over years of studies and construction to focus on one of the most challenging and exciting areas of astronomy: the search for exoplanets. By using a state-of-the-art adaptive optics system and specialised instruments such as ZIMPOL, SPHERE can achieve the challenging feat of directly imaging exoplanets. However, SPHERE’s capabilities are not limited to hunting for elusive exoplanets. The instrument can also be used to study a variety of astronomical sources — as can be seen from this spellbinding image of the stellar peculiarities of R Aquarii.

‘Cosmic Serpent’ dancing with stellar winds

This serpentine swirl, captured by the VISIR instrument on ESO’s Very Large Telescope (VLT), has an explosive future ahead of it; it is a Wolf-Rayet star system, and a likely source of one of the most energetic phenomena in the Universe — a long-duration gamma-ray burst (GRB).

“This is the first such system to be discovered in our own galaxy,” explains Joseph Callingham of the Netherlands Institute for Radio Astronomy (ASTRON), lead author of the study reporting this system. “We never expected to find such a system in our own backyard”.

The system, which comprises a nest of massive stars surrounded by a “pinwheel” of dust, is  officially known only by unwieldy catalogue references like 2XMM J160050.7-514245. However, the astronomers chose to give this fascinating object a catchier moniker — “Apep”.

Apep got its nickname for its sinuous shape, reminiscent of a snake coiled around the central stars. Its namesake was an ancient Egyptian deity, a gargantuan serpent embodying chaos — fitting for such a violent system. It was believed that Ra, the Sun god, would battle with Apep every night; prayer and worship ensured Ra’s victory and the return of the Sun.

GRBs are among the most powerful explosions in the Universe. Lasting between a few thousandths of a second and a few hours, they can release as much energy as the Sun will output over its entire lifetime. Long-duration GRBs — those which last for longer than 2 seconds — are believed to be caused by the supernova explosions of rapidly-rotating Wolf-Rayet stars.

Some of the most massive stars evolve into Wolf-Rayet stars towards the end of their lives. This stage is short-lived, and Wolf-Rayets survive in this state for only a few hundred thousand years — the blink of an eye in cosmological terms. In that time, they throw out huge amounts of material in the form of a powerful stellar wind, hurling matter outwards at millions of kilometres per hour; Apep’s stellar winds were measured to travel at an astonishing 12 million km/h.

These stellar winds have created the elaborate plumes surrounding the triple star system — which consists of a binary star system and a companion single star bound together by gravity. Though only two star-like objects are visible in the image, the lower source is in fact an unresolved binary Wolf-Rayet star. This binary is responsible for sculpting the serpentine swirls surrounding Apep, which are formed in the wake of the colliding stellar winds from the two Wolf-Rayet stars.

Compared to the extraordinary speed of Apep’s winds, the dust pinwheel itself swirls outwards at a leisurely pace, “crawling” along at less than 2 million km/h. The wild discrepancy between the speed of Apep’s rapid stellar winds and that of the unhurried dust pinwheel is thought to result from one of the stars in the binary launching both a fast and a slow wind — in different directions.

This would imply that the star is undergoing near-critical rotation — that is, rotating so fast that it is nearly ripping itself apart. A Wolf-Rayet star with such rapid rotation is believed to produce a long-duration GRB when its core collapses at the end of its life.

Astronomers reveal super-Earth orbiting Barnard’s Star

A planet has been detected orbiting Barnard’s Star, a mere 6 light-years away. This breakthrough — announced in a paper published today in the journal Nature — is a result of the Red Dots and CARMENES projects, whose search for local rocky planets has already uncovered a new world orbiting our nearest neighbour, Proxima Centauri.

The planet, designated Barnard’s Star b, is now the second-closest known exoplanet to Earth. The gathered data indicate that the planet could be a super-Earth, with a mass at least 3.2 times that of the Earth, orbiting its host star roughly every 233 days. Barnard’s Star, the planet’s host star, is a red dwarf — a cool, low-mass star — which only dimly illuminates this newly-discovered world. Light from Barnard’s Star provides its planet with only 2% of the energy the Earth receives from the Sun.

Despite being relatively close to its parent star — at a distance only 0.4 times that between Earth and the Sun — the exoplanet lies close to the snow line, the region where volatile compounds such as water can condense into solid ice. This freezing, shadowy world could have a temperature of –170 °C, making it inhospitable for life as we know it.

Named for astronomer E. E. Barnard, Barnard’s Star is the closest single star to the Sun. While the star itself is ancient — probably twice the age of our Sun — and relatively inactive, it also has the fastest apparent motion of any star in the night sky. Super-Earths are the most common type of planet to form around low-mass stars such as Barnard’s Star, lending credibility to this newly discovered planetary candidate. Furthermore, current theories of planetary formation predict that the snow line is the ideal location for such planets to form.

Previous searches for a planet around Barnard’s Star have had disappointing results — this recent breakthrough was possible only by combining measurements from several high-precision instruments mounted on telescopes all over the world.

“After a very careful analysis, we are 99% confident that the planet is there,” stated the team’s lead scientist, Ignasi Ribas (Institute of Space Studies of Catalonia and the Institute of Space Sciences, CSIC in Spain). “However, we’ll continue to observe this fast-moving star to exclude possible, but improbable, natural variations of the stellar brightness which could masquerade as a planet.”

Among the instruments used were ESO’s famous planet-hunting HARPS and UVES spectrographs. “HARPS played a vital part in this project. We combined archival data from other teams with new, overlapping, measurements of Barnard’s Star from different facilities,” commented Guillem Anglada Escudé (Queen Mary University of London), co-lead scientist of the team behind this result. “The combination of instruments was key to allowing us to cross-check our result.”

The astronomers used the Doppler effect to find the exoplanet candidate. While the planet orbits the star, its gravitational pull causes the star to wobble. When the star moves away from the Earth, its spectrum redshifts; that is, it moves towards longer wavelengths. Similarly, starlight is shifted towards shorter, bluer, wavelengths when the star moves towards Earth.

Astronomers take advantage of this effect to measure the changes in a star’s velocity due to an orbiting exoplanet — with astounding accuracy. HARPS can detect changes in the star’s velocity as small as 3.5 km/h — about walking pace. This approach to exoplanet hunting, known as the radial velocity method, had never before been used to detect a super-Earth in such a large orbit around its star.
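To get a feel for the precision involved, the non-relativistic Doppler relation Δλ = λ·v/c can be used to estimate how small a wavelength shift corresponds to HARPS’s quoted 3.5 km/h velocity sensitivity. The sketch below is purely illustrative — it is not the team’s analysis pipeline — and the choice of the hydrogen H-alpha line as a reference wavelength is an assumption for the example:

```python
# Illustrative sketch: the Doppler wavelength shift corresponding to a
# stellar wobble at walking pace (HARPS's ~3.5 km/h velocity sensitivity).
C = 299_792_458.0  # speed of light in m/s

def doppler_shift(rest_wavelength_nm: float, velocity_m_s: float) -> float:
    """Non-relativistic Doppler shift: delta_lambda = lambda * v / c.
    A positive velocity (star receding) gives a redshift (longer wavelength);
    a negative velocity (star approaching) gives a blueshift."""
    return rest_wavelength_nm * velocity_m_s / C

v = 3.5 / 3.6       # 3.5 km/h converted to m/s (~0.97 m/s)
h_alpha = 656.28    # rest wavelength of the H-alpha line, in nm (example choice)

shift = doppler_shift(h_alpha, v)
print(f"Shift in H-alpha: {shift:.2e} nm")
```

The shift comes out to only a few millionths of a nanometre — roughly a billionth of the line’s rest wavelength — which is why two decades of measurements from multiple high-precision spectrographs were needed to confirm the signal.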

“We used observations from seven different instruments, spanning 20 years of measurements, making this one of the largest and most extensive datasets ever used for precise radial velocity studies,” explained Ribas. “The combination of all data led to a total of 771 measurements — a huge amount of information!”