RSS feed source: National Science Foundation

Using the Gemini North telescope in Hawaii, astronomers have captured an image of comet 3I/ATLAS, an interstellar object first detected on July 1, 2025, by the Asteroid Terrestrial-impact Last Alert System (ATLAS), for which the comet is named. The letter “I” stands for “interstellar,” and the “3” indicates it is only the third object from another star system ever observed. The observations will help scientists study this rare object’s origin, orbit and composition.

Gemini North is one half of the International Gemini Observatory, which is funded in part by the U.S. National Science Foundation. The imagery reveals the comet’s compact coma — a cloud of gas and dust surrounding its icy nucleus.

“The sensitivity and scheduling agility of the International Gemini Observatory has provided critical early characterization of this interstellar wanderer,” says Martin Still, NSF program director for the International Gemini Observatory. “We look forward to a bounty of new data and insights as this object warms itself on sunlight before continuing its cold, dark journey between the stars.”

Credit: International Gemini Observatory/NOIRLab/NSF/AURA/K. Meech (IfA/U. Hawaii)/Image Processing: Jen Miller & Mahdi Zamani (NSF NOIRLab)

Interstellar comet 3I/ATLAS is captured in this image by the NSF-funded Gemini North telescope. The image shows the comet’s compact coma — a cloud of gas and dust surrounding its icy nucleus.

RSS feed source: National Science Foundation

Workers Exceeded Annual Dose Limit

Posted on: 17 July 2025

Event Date: 08 April 2025
Event Type: Irradiation/Accelerator Facility
Event Location: United States of America, Noblesville, Indiana / Curium US LLC
INES Rating: 2 (Final)

On April 8, 2025, two workers were performing waste handling activities in a hot cell basement of a cyclotron facility that produces strontium-82 from metallic rubidium targets. One worker removed a high-level liquid waste container from a shielded barrel and placed the unshielded container on the ground adjacent to the work area, where activities continued for approximately 15 minutes. Both workers’ electronic dosimeters alarmed for high dose soon after the container was removed from shielding; however, neither worker noticed these alarms because of the personal protective equipment they had donned, including respirators. Radiation surveys were performed upon entry to the area and prior to removing the container from shielding, but not again until after the workers left the area and noticed the excessive doses recorded on their electronic dosimeters. Radiation dose rates on contact with the waste container exceeded 9.99 Sv/hr (999 R/hr), which was the upper limit of available instrumentation. The licensee later
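For a rough sense of scale, dose accumulates as dose rate multiplied by exposure time. The sketch below is illustrative only: the contact reading and the roughly 15-minute duration come from the report above, while the dose rate assumed at the workers’ position and the comparison to the 50 mSv U.S. annual occupational limit are assumptions added here, since the excerpt does not state the workers’ actual doses.

# Illustrative only: dose = dose rate x exposure time.
# The contact reading (>9.99 Sv/hr) and the ~15-minute duration come from the
# report excerpt; the rate at the workers' position is a made-up assumption,
# not a measured figure.
contact_rate_sv_per_hr = 9.99        # instrument upper limit; actual rate was higher
exposure_time_hr = 15 / 60           # ~15 minutes of continued work
assumed_worker_rate_sv_per_hr = 0.5  # hypothetical rate at the workers' position

contact_dose_sv = contact_rate_sv_per_hr * exposure_time_hr
worker_dose_sv = assumed_worker_rate_sv_per_hr * exposure_time_hr
annual_limit_sv = 0.05               # U.S. occupational limit: 50 mSv per year

print(f"Dose at contact over 15 min: >{contact_dose_sv:.2f} Sv")
print(f"Dose at assumed worker position: {worker_dose_sv:.3f} Sv "
      f"({worker_dose_sv / annual_limit_sv:.1f}x the 50 mSv annual limit)")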

RSS feed source: National Science Foundation

Artificial intelligence has transformed fields like medicine and finance, but it hasn’t gained much traction in manufacturing. Factories present a different challenge for AI: They are structured, fast-paced environments that rely on precision and critical timing. Success requires more than powerful algorithms; it demands a deep, real-time understanding of complex systems, equipment and workflows. A new AI model designed specifically for manufacturing seeks to address this challenge and revolutionize how factories operate.

With support from the U.S. National Science Foundation, a team led by California State University Northridge’s Autonomy Research Center for STEAHM has developed MaVila — short for Manufacturing, Vision and Language — an intelligent assistant that combines image analysis and natural language processing to help manufacturers detect problems, suggest improvements and communicate with machines in real time. Their goal is to create smarter, more adaptive manufacturing systems that can better support one of the most important sectors of the U.S. economy.

MaVila takes a different approach. Instead of relying on outside data, like information on the internet, it is trained with manufacturing-specific knowledge from the start. It learns directly from visual and language-based data in factory settings. The tool can “see” and “talk” — analyzing images of parts, describing defects in plain language, suggesting fixes and even communicating with machines to carry out automatic adjustments.
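The article does not include implementation details, but the “see” and “talk” loop it describes can be sketched as a small vision-language pipeline in Python. Everything below (the class names, the canned inspection result, the machine-adjustment stub) is a hypothetical illustration of the workflow described above, not MaVila’s actual interface.

from dataclasses import dataclass

@dataclass
class InspectionResult:
    defect_found: bool
    description: str       # plain-language account of what the model "sees"
    suggested_fix: str     # recommended process adjustment

class VisionLanguageInspector:
    """Hypothetical stand-in for a manufacturing vision-language model."""

    def inspect(self, image_path: str) -> InspectionResult:
        # A real model would encode the image, ground it in manufacturing
        # vocabulary, and generate text; here we return a canned example.
        return InspectionResult(
            defect_found=True,
            description=f"{image_path}: surface porosity near the weld seam",
            suggested_fix="reduce feed rate by 10% and re-run the weld pass",
        )

def send_machine_adjustment(command: str) -> None:
    # Placeholder for a machine interface (e.g., OPC UA or a vendor API).
    print(f"[machine] applying adjustment: {command}")

if __name__ == "__main__":
    inspector = VisionLanguageInspector()
    result = inspector.inspect("part_0042.png")
    print(result.description)
    if result.defect_found:
        send_machine_adjustment(result.suggested_fix)

The design point the article emphasizes is that the model is trained on manufacturing-specific visual and language data rather than general internet data; the stubbed inspect() call is where that domain-specific model would sit.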

MaVila was trained using a specialized approach that required
