Calling all extension developers! With Manifest V3 picking up steam again, we wanted to provide some visibility into our current plans as a lot has happened since we published our last update.
Back in 2022 we released our initial implementation of MV3, the latest version of the extensions platform, in Firefox. Since then, we have been hard at work collaborating with other browser vendors and community members in the W3C WebExtensions Community Group (WECG). Our shared goal has been to improve extension APIs while addressing cross-browser compatibility. That collaboration has yielded some great results to date, and we’re proud to say our participation has been instrumental in shaping and designing those APIs to ensure broader applicability across browsers.
We continue to support DOM-based background scripts in the form of Event pages, and the blocking webRequest feature, as explained in our previous blog post. Chrome’s version of MV3 requires service worker-based background scripts, which we do not support yet. However, an extension can specify both and have it work in Chrome 121+ and Firefox 121+. Support for Event pages, along with support for blocking webRequest, is a divergence from Chrome that enables use cases that are not covered by Chrome’s MV3 implementation.
Well, what’s happening with MV2, you ask? Great question – in case you missed it, Google announced late last year its plans to resume its MV2 deprecation schedule. Firefox, however, has no plans to deprecate MV2 and will continue to support MV2 extensions for the foreseeable future. And even if we re-evaluate this decision at some point down the road, we anticipate giving developers at least 12 months’ notice so they can adjust accordingly without feeling rushed.
As our plans solidify, future updates around our MV3 efforts will be shared via this blog. We are loosely targeting our next update after the conclusion of the upcoming WECG meeting at the Apple offices in San Diego. For more information on adopting MV3, please refer to our migration guide. Another great resource worth checking out is the recent FOSDEM presentation a couple team members delivered, Firefox, Android, and Cross-browser WebExtensions in 2024.
If you have questions, concerns or feedback on Manifest V3 we would love to hear from you in the comments section below or if you prefer, drop us an email.
The post Manifest V3 & Manifest V2 (March 2024 update) appeared first on Mozilla Add-ons Community Blog.
San Francisco’s ballot initiative Proposition E is a dangerous and deceptive measure that threatens our privacy, safety, and democratic ideals. It would give the police more power to surveil, chase, and harm. It would allow the police to secretly acquire and use unproven surveillance technologies for a year or more without oversight, eliminating the hard-won protections backed by a majority of San Franciscans that are currently in place. Prop E is not a solution to the city’s challenges, but rather a threat to our rights and freedoms.
Don’t be fooled by the misleading arguments of Prop E's supporters. A group of tech billionaires have contributed a small fortune to convince San Francisco voters that they would be safer if surveilled. They want us to believe that Prop E will make us safer and more secure, but the truth is that it will do the opposite. Prop E will allow the police to use any surveillance technology they want for up to a year without considering whether it works as promised—or at all—or whether it presents risks to residents’ privacy or safety. Police only have to present a use policy after a year of free and unaccountable use, and absent a majority vote of the Board of Supervisors rejecting the policy, this unaccountable use could continue indefinitely. Worse still, some technologies, like surveillance cameras and drones, would be exempt from oversight indefinitely, putting the unilateral decision about when, where, and how to deploy such technology in the hands of the SFPD.
We want something different for our city. In 2019, with the support of a wide range of community members and civil society groups including EFF, San Francisco’s Board of Supervisors took a historic step forward by passing a groundbreaking surveillance transparency and accountability ordinance through a 10-1 vote. The law requires that before a city department, including the police, acquires or uses a surveillance technology, the department must present a use policy to the Board of Supervisors, which then considers the proposal in a public process that offers opportunity for public comment. This process respects privacy, dignity, and safety and empowers residents to make their voices heard about the potential impacts and risks.
Despite what Prop E proponents would have you believe, the city’s surveillance ordinance has not stopped police from acquiring new technologies. In fact, they have gained access to broad networks of live-feed cameras. Current law helps ensure that the police follow reasonable guidelines on using technology and mitigating potentially disparate harms. Prop E would gut police accountability from this law and return decision-making about how we are surveilled to closed spaces where unproven and unvetted vendor promises rule the narrative.
As San Francisco residents, we must stand up for ourselves and our city and vote No on Prop E. Voting No on Prop E is not only an easy choice, but also a necessary one. It is a choice that reflects our values and vision for San Francisco. It is a choice that shows that we will not let a million-dollar campaign of fear drive us to sacrifice our rights. Voting No on Prop E is a choice that proves we are unwilling to accept anything less than what we deserve: privacy, safety, and accountability.
March 5 is election day. Make your voice heard. Vote No on Prop E.
Some progress in the automotive industry is laudable. Cars are safer than ever and more efficient, too. But there are other changes we'd happily leave by the side of the road. That glossy "piano black" trim that's been overused the last few years, for starters. And the industry's overreliance on touchscreens for functions that used to be discrete controls. Well, the automotive safety organization European New Car Assessment Programme (Euro NCAP) feels the same way about that last one, and it says the controls ought to change in 2026.
"The overuse of touchscreens is an industry-wide problem, with almost every vehicle-maker moving key controls onto central touchscreens, obliging drivers to take their eyes off the road and raising the risk of distraction crashes," said Matthew Avery, Euro NCAP's director of strategic development.
"New Euro NCAP tests due in 2026 will encourage manufacturers to use separate, physical controls for basic functions in an intuitive manner, limiting eyes-off-road time and therefore promoting safer driving," he said.
But a rafflesia by any other name smells the same. ShotSpotter had experienced a bit of quick uptake by law enforcement agencies, but in recent years, it was more well-known for having contracts terminated by major cities, allegedly altering data to better fit police report narratives, and suing reporters for covering nothing more than allegations made against the company in court.
No one’s going to stop calling ShotSpotter by its original Christian name. SoundThinking may be the new brand, but if anyone wants readers to instantly understand the tech being discussed, ShotSpotter is the name readers will see in headlines and articles’ bodies.
ShotSpotter remains problematic, despite the issuance of new company letterhead. Several cities and law enforcement agencies have discovered the tech contributes almost nothing to things like crime reduction or homicide investigations.
Critics of the tech have used what data is actually available to show cities flood poor neighborhoods heavily populated by minorities with ShotSpotter sensors while leaving other, whiter, richer areas untouched. ShotSpotter is just more tech-washing of biased policing — a self-fulfilling prophecy where biased cops can point to alleged spotted shots in the places they’ve placed the most sensors as justification for more harassment and subjugation of poor minority communities.
But it’s not just the supposed “wrong” side of the tracks being flooded with ShotSpotter sensors (although there’s still plenty of that). ShotSpotter has managed to insert itself anywhere government agencies think should be wired for (gun) sound. Leaked data shared with Wired indicates there are plenty of deployments the public doesn’t know about, many of which don’t involve neighborhoods assumed to be riddled with violent crime.
According to the document, SoundThinking equipment has been installed at more than a thousand elementary and high schools; they are perched atop dozens of billboards, scores of hospitals, and within more than a hundred public housing complexes. They can be found on significant US government buildings, including the headquarters of the Federal Bureau of Investigation, the Department of Justice, and the US Court of Appeals in Washington, DC.
Any place where someone feels they might want to be notified of a potential gunshot appears to have been infected with ShotSpotter tech. The data obtained by Wired shows ShotSpotter sensors have been installed in 84 cities and 34 states. Several cities are engaged in mass deployment.
Nine cities have more than 500 sensors installed, including Albuquerque, New Mexico; Chicago, Illinois; Washington, DC; San Juan, Puerto Rico; and Las Vegas, Nevada. The document does not indicate whether the list of sensors is comprehensive.
One of the cities listed (Chicago) is in the process of ditching the tech after an investigation showed the tech was doing little more than setting tax dollars on fire. As for Albuquerque, questionable shot-spotting tech is perhaps the least of its problems, considering the PD is still under a DOJ consent decree prompted by its officers’ routine excessive force deployments and rights violations, not to mention the current DUI scandal that saw law enforcement officers’ homes raided by FBI agents.
The data also confirms ShotSpotter deployment is just as biased as the law enforcement agencies deploying the gunshot sensors.
Nearly three-quarters of these neighborhoods are majority nonwhite, and the average household earns a little more than $50,000 a year.
These facts aren’t being denied by ShotSpotter.
In an interview, Tom Chittum, senior vice president of forensic services at SoundThinking, tells WIRED that he is “willing to accept that our findings are true” and confirmed that the document is likely authentic.
That being said, ShotSpotter may aid in biased policing efforts, but it cannot be blamed for discriminatory placement. While the CEO says ShotSpotter performs the installation, it only does so after consulting with the local law enforcement agencies that have purchased the tech. Once cops tell ShotSpotter where to place the sensors, the company works with local businesses, utilities, and even private homeowners to install the sensors — the latter of which sometimes involves the company giving people “gift cards” in exchange for temporary access to their property.
Certain areas or properties may be blanketed with sensors but that doesn’t mean useful data is being generated. According to the data obtained by Wired, nearly 10% of the sensors were categorized as broken or out of service. That might explain why the Chicago PD and its 3,500+ sensors were left in the dark when shooters fired 55 shots at a gyro shop, wounding two people.
Despite this leak, ShotSpotter’s not going anywhere. While it affirms all the worst things people assume about the tech, these negatives are often seen as positives by their customers. The flooding of poor neighborhoods with sensors guarantees more gunshots will be detected in the places cops already consider to be filled with criminals. The expansion to other markets (hospitals, schools) allows the company to claim it’s not in the business of aiding and abetting biased policing efforts. And the new brand will, at some point, allow the company to distance itself from negative press that utilizes its former name.
ShotSpotter isn’t making policing any better. It’s only contributing more faulty data that will be fed to other machines to perpetuate biased policing efforts. And it’s arguably not making anyone any safer, despite the deployment of hundreds of thousands of sensors across the nation. Somehow, it will continue to make money, despite it often appearing to be indistinguishable from doing nothing at all.
On Thursday at dawn, Israeli troops unleashed a barrage of gunfire on a crowd of starving Palestinians waiting for aid trucks in Gaza City, killing over one hundred people and wounding more than one thousand others. The death toll is expected to rise as most hospitals in Gaza have ceased operating, having run out of fuel, medicine, and blood.
Footage shows Israeli soldiers firing indiscriminately at thousands of civilians who gathered at al-Nabulsi roundabout at al-Rasheed Street to receive flour from aid trucks. Medical sources report that most victims were shot directly in the head, chest, or stomach. Jadallah al-Shafei, the nursing director at al-Shifa hospital in northern Gaza, told Al Jazeera: “All injuries result from gunfire and artillery shells; [Israeli] claims of a stampede are entirely fabricated.”
Israeli tanks ran over dead and wounded bodies. Many victims were brought to hospitals in donkey carts, as ambulances could not reach the scene to collect all the dead and wounded.
The scene resembled a slaughterhouse. Most of the victims were children. A heartbroken mother was heard screaming through the crowds: “My girl is gone; she’s been starving for seven days.” A woman at Kamal Adwan hospital was pleading with the world: “We are under siege. Take pity on us. Ramadan is coming soon. People should look at us. Pity us.”
The massacre is a war crime on top of a war crime, as Israel slaughtered Palestinian civilians whom it has been starving for months, and whose only crime was queuing up to receive flour for their families. Palestinian officials have described the carnage as a “cold-blooded massacre.” Palestinians have dubbed it the Flour Massacre — or perhaps more fittingly, the Red Flour Massacre, in reference to the bloodstained flour left scattered on the ground.
The UN Security Council has convened an emergency meeting. The United Nations Relief and Works Agency (UNRWA) chief described “another day from hell” in Gaza, while UN aid chief, Martin Griffiths, lamented the “life draining out of Gaza at terrifying speed.” Following the massacre, Colombian president Gustavo Petro has suspended arms purchases from Israel, saying: “The whole world should blockade [Benjamin] Netanyahu.”
Meanwhile, Itamar Ben-Gvir, Israel’s national security minister, hailed the soldiers who committed the massacre as “heroes,” pledging total support for Israeli troops in Gaza. Using US-made drones, Israeli forces recorded the carnage from the air for fun. Israeli Telegram channels have celebrated the massacre of starving Palestinians, cheering the prospect of cannibalism. Many Israelis have been advocating for the starvation of Palestinians in Gaza.
The Palestinian death toll has now surpassed thirty thousand, most of them women and children. Over seventy thousand people have been wounded. Nearly two million civilians have been displaced. Half of the population are starving. Several hundred thousand Palestinians are believed to remain in northern Gaza despite Israeli orders to evacuate the area; many have been reduced to eating animal fodder for survival. Footage of bone-thin children vomiting animal food and then dying has shocked observers. Gaza doctors have been warning that growing famine in Gaza is “turning children into skeletons.”
The world is witnessing the brutal dehumanization of an entire people unfold in broad daylight, as thousands of starving Palestinians have been crowding daily along the Gaza beach, waving desperately for aid planes as they air-drop food far and deep into the sea.
International organizations are acting helpless. Aid groups say it has become nearly impossible to deliver humanitarian aid in Gaza because of the presence of the Israeli military. Early this month, the World Food Program announced that it was pausing deliveries to the North because of the growing chaos and relentless bombing, despite having warned that “famine is imminent.”
For nearly five months, and despite international appeals to allow aid into Gaza, Israel has deprived the besieged Strip of food, water, and medicine. It has sealed the Rafah Crossing with Egypt, while Israeli settlers and soldiers continue to block aid trucks at Israel’s Kerem Shalom border crossing. Meanwhile, crowds of Israeli settlers, who have been demanding to be allowed to resettle Gaza, have breached the Erez Crossing near the border wall with Gaza in an attempt to build settlements on the ruins of displaced Palestinians.
Paying lip service to Palestinian lives, US president Joe Biden has said that killing more than one hundred Palestinians near aid trucks will complicate cease-fire talks. But the truth is that the Biden administration has itself to blame for these atrocities, having vetoed three UN resolutions calling for a cease-fire in Gaza, while deploying US Air Force teams to Israel to assist in its war crimes and genocide in Gaza.
The United States has also been a partner in starving Palestinians in Gaza, which constitutes a war crime, a crime against humanity, and an act of genocide. The Biden administration continues to halt aid to UNRWA, even as US officials have been warning that Gaza is “turning into Mogadishu.” Acting helpless before Israel, the United States is now exploring the possibility of “air-dropping” food from US military planes into Gaza — rather than attempting to stop the assault that makes those airdrops necessary.
The Rasheed Street massacre underscores Israel’s flagrant mockery of international justice. It comes one month after the International Court of Justice ordered Israel to stop its “plausible genocide” in Gaza. It comes barely one day after the European Parliament called for a permanent cease-fire in Gaza.
Emboldened by US complicity, Israel continues to act with impunity in Gaza, in a blatant violation of international laws and norms. But as Israel continues to enjoy the unconditional support of the Biden administration, it’s hard to see why it should stop massacring Palestinians.
As we celebrate Fair Use/Fair Dealing Week, we are reminded of all the ways these flexible copyright exceptions enable libraries to preserve materials and meet the needs of the communities they serve. Indeed, fair use is essential to the functioning of libraries, and underlies many of the ordinary library practices that we all take for granted. In this blog post, we wanted to describe a few of the ways the fair use doctrine has helped us build our library.
The Internet Archive has been archiving the web since the mid-1990s. Our web collection now includes more than 850 billion web pages, with hundreds of millions added each day. The Wayback Machine is a free service that lets people visit these archived websites. Users can type in a URL, select a date range, and then begin surfing an archived version of the web.
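The same lookup a Wayback Machine visitor performs by hand can be sketched programmatically. In this hedged example, the `archive.org/wayback/available` endpoint and its `archived_snapshots` response shape are assumptions drawn from the API's public documentation, not details given in this post:

```python
# Sketch: look up the archived capture of a page closest to a given date,
# the way a Wayback Machine user does by typing a URL and picking a date.
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build the availability-API query URL for a page and an optional YYYYMMDD date."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(url, timestamp=None):
    """Return the replay URL of the closest archived capture, or None if none is recorded."""
    with urllib.request.urlopen(availability_query(url, timestamp)) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None
```

If the service responds as assumed, `closest_snapshot("example.com", "20060101")` would return the replay URL of the capture nearest to January 1, 2006.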
Web archives are used for a variety of important purposes, many of which are themselves fair uses. News reporting and investigative journalism are one such use of the Wayback Machine. Indeed, thousands of news articles have relied upon historical versions of the web from the Wayback Machine. Just last week, 13 links to the Wayback Machine were used in a CNN story about an Ohio GOP Senate candidate’s previous statements that were critical of former President Trump. Our web archive also becomes an urgent backup for media sites that are shut down suddenly, whether by authoritarian governments or for other reasons, often becoming the only accessible source both for the authors of these stories and for the public. Another important purpose web archives can serve is as evidence in legal disputes. Attorneys use the Wayback Machine in their daily practice for evidentiary and research purposes. In 2023 alone, the Internet Archive provided 450 affidavits attesting to Wayback Machine captures used as evidence in court.
The Wayback Machine also makes other parts of the web, such as Wikipedia, more useful and reliable. To date, the Internet Archive has been able to repair over 19 million broken links (URLs that had returned a 404 “Page Not Found” error) across 320 different Wikipedia language editions. There are many reasons, including bit rot and content drift, why links stop working. Restoring links ensures that Wikipedia remains an accurate and verifiable source of information for the public good. And we hope to build new tools and partnerships to help create a more dependable knowledge ecosystem as more and more content on the web is created by generative AI.
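The link-repair workflow described above can be sketched roughly as follows. This is a hedged illustration: `is_dead` is a deliberately simplified stand-in for real link-rot detection (which must also handle soft 404s and content drift), and the `web.archive.org/web/<timestamp>/<url>` replay form is an assumption about the public URL scheme, not a method this post describes:

```python
# Sketch: detect a dead cited link and rewrite it to a Wayback Machine
# replay URL, the core idea behind repairing broken Wikipedia references.
import urllib.error
import urllib.request

def is_dead(url, timeout=10):
    """True if the URL currently returns HTTP 404 or is unreachable."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        urllib.request.urlopen(req, timeout=timeout)
        return False
    except urllib.error.HTTPError as e:
        return e.code == 404
    except urllib.error.URLError:
        return True  # unreachable host: treated as dead in this sketch

def archived_replacement(url, timestamp):
    """Rewrite a dead link to a Wayback replay URL; timestamp is YYYYMMDDhhmmss
    (or any prefix of it), selecting the capture nearest that moment."""
    return f"https://web.archive.org/web/{timestamp}/{url}"
```

A repair bot following this pattern would leave working links untouched and substitute the archived copy only when the original 404s, which is how the restored citations stay verifiable.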
The Fair Use doctrine is broadly considered to be what makes web archiving possible. Without it, much of our knowledge and cultural heritage–huge amounts of which are now artifacts in digital form–would be at risk. In today’s chaotic information ecosystem, safeguarding this material in an open, accessible, and transparent way is vital for history and vital for democracy.
Whether you are an individual who has rendered an appliance useless because you lost the instructions, or a professional mechanic looking to fix an old vehicle, owners’ manuals are invaluable. As the right to repair movement has amply demonstrated, copyright should not stand as an obstacle to using machines you’ve bought and paid for. This is a place where fair use can shine.
Over the years, the Internet Archive has received manuals, instruction sheets and informational pamphlets of all kinds. The Manuals collection has well over a million items, available for users to access 24/7 at no cost. This resource helps people exercise their right to repair and extend the life of their products. Whether you are a rocket scientist needing to operate your space shuttle, a mechanic who needs to repair a vintage VW Bug, or a curious kid trying to fix up your mom’s old computer, having free online access to the technical documentation you need is essential. And in many cases, there would appear to be no other way to get access to this crucial information.
Some preserved manuals are a single printed page with poorly constructed diagrams. Others are multi-volume tomes that give exacting details on operation of a complex piece of machinery. These materials are more than instructions or a list of components. They reflect the priorities and approaches that companies and individuals take with products, as well as the artistic and visual efforts to make an item clear to the reader.
This collection is a cool example of how fair use provides a framework for the Internet Archive to share critical knowledge with consumers. At the same time, it provides a historical timeline of sorts for innovation and the development of technology.
From preserving our digital history to providing access to manuals of obsolete devices, fair use helps libraries like ours serve our community. And while there are no doubt a variety of commercial projects that properly rely on fair use, fair use is at heart about the public good. As we celebrate Fair Use week, we should remember the crucial role it plays, and ensure that we preserve and protect fair use for the good of future generations. For more on events and news on Fair Use/Fair Dealing Week, visit FairUseWeek.org.
Listening to police and fire calls used to be a pretty simple proposition: buy a scanner, punch in some frequencies — or if you’re old enough, buy the right crystals — and you’re off to the races. It was a pretty cheap and easy hobby, all things considered. But progress marches on, and with it came things like trunking radio and digital modulation, requiring ever more sophisticated scanners, often commanding eye-watering prices.
Having had enough of that, [Top DNG] decided to roll his own digital trunking scanner on the cheap. The first video below is a brief intro to the receiver based on the combination of an RTL-SDR dongle and a Raspberry Pi 5. The Pi is set up in headless mode and runs sdrtrunk, which monitors the control channels and frequency channels of trunking radio systems, as well as decoding the P25 digital modulation — as long as it’s not encrypted; don’t even get us started on that pet peeve. The receiver also sports a small HDMI touchscreen display, and everything can be powered over USB, so it should be pretty portable. The best part? Everything can be had for about $250, considerably cheaper than the $600 or so needed to get into a purpose-built digital trunking scanner — we’re looking at our Bearcat BCD996P2 right now and shedding a few tears.
The second video below has complete details and a walkthrough of a build, from start to finish. [Top DNG] notes that sdrtrunk runs the Pi pretty hard, so a heat sink and fan are a must. We’d probably go with an enclosure too, just to keep the SBC safe. A better antenna is a good idea, too, although it seems like [Top DNG] is in the thick of things in Los Angeles, where LAPD radio towers abound. The setup could probably support multiple SDR dongles, opening up a host of possibilities. It might even be nice to team this up with a Boondock Echo. We’ve had deep dives into trunking before if you want more details.
There are some 2,078 planters across the neighborhood, according to a block-by-block count conducted by Mission Local reporters. About 200 of these are the large metallic containers and another 400 are wooden barrels. There are 155 wooden troughs. The remaining 1,307 are a mixture of receptacles ranging in size from tiny clay pots to massive sidewalk gardens filled with an assortment of vessels.
Kudos to the web designers for making this article be both: an interactive scrolling 3d-ish map thingy; and also, completely legible in Reader Mode! One usually does not get both.
There’s been plenty of bad news regarding federal legislation in 2023. For starters, Congress has failed to pass meaningful comprehensive data privacy reforms. Instead, legislators have spent an enormous amount of energy pushing dangerous legislation that’s intended to limit young people’s use of some of the most popular sites and apps, all under the guise of protecting kids. Unfortunately, many of these bills would run roughshod over the rights of young people and adults in the process. We spent much of the year fighting these dangerous “child safety” bills, while also pointing out to legislators that comprehensive data privacy legislation would be more likely to pass constitutional muster and address many of the issues that these child safety bills focus on.
But there’s also good news: so far, none of these dangerous bills have been passed at the federal level, or signed into law. That's thanks to a large coalition of digital rights groups and other organizations pushing back, as well as tens of thousands of individuals demanding protections for online rights in the many bills put forward.
The biggest danger has come from the Kids Online Safety Act (KOSA). Originally introduced in 2022, it was reintroduced this year and amended several times, and as of today, has 46 co-sponsors in the Senate. As soon as it was reintroduced, we fought back, because KOSA is fundamentally a censorship bill. The heart of the bill is a “Duty of Care” that the government will force on a huge number of websites, apps, social networks, messaging forums, and online video games. KOSA will compel even the smallest online forums to take action against content that politicians believe will cause minors “anxiety,” “depression,” or encourage substance abuse, among other behaviors. Of course, almost any content could easily fit into these categories—in particular, truthful news about what’s going on in the world, including wars, gun violence, and climate change. Kids don’t need to fall into a wormhole of internet content to get anxious; they could see a newspaper on the breakfast table.
KOSA will empower every state’s attorney general as well as the Federal Trade Commission (FTC) to file lawsuits against websites or apps that the government believes are failing to “prevent or mitigate” the list of bad things that could influence kids online. Platforms affected by KOSA would likely find it impossible to filter out this type of “harmful” content, though they would likely try. Online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave, and institute age verification tools to ensure that it happens. Age verification systems are surveillance systems that threaten everyone’s privacy. Mandatory age verification, and with it, mandatory identity verification, is the wrong approach to protecting young people online.
The Senate passed amendments to KOSA later in the year, but these do not resolve its issues. As an example, liability under the law was shifted to be triggered only for content that online services recommend to users under 18, rather than content that minors specifically search for. In practice, that means platforms could not proactively show young users content deemed “harmful,” but could still present that content to minors who search for it. How this would play out in practice is unclear; search results are recommendations, and future recommendations are impacted by previous searches. But however it’s interpreted, it’s still censorship—and it fundamentally misunderstands how search works online. Ultimately, no amendment will change the basic fact that KOSA’s duty of care turns what is meant to be a bill about child safety into a censorship bill that will harm the rights of both adult and minor users.
Fortunately, so many people oppose KOSA that it never made it to the Senate floor for a full vote. In fact, even many of the young people it is intended to help are vehemently against it. We will continue to oppose it in the new year, and urge you to contact your congressperson about it today.
KOSA wasn’t the only child safety bill Congress put forward this year. The Protecting Kids on Social Media Act would combine some of the worst elements of other social media bills aimed at “protecting the children” into a single law. It includes elements of KOSA as well as several ideas pulled from state bills that have passed this year, such as Utah’s surveillance-heavy Social Media Regulations law.
When originally introduced, the Protecting Kids on Social Media Act had five major components:
EFF is opposed to all of these components, and has written extensively about why age verification mandates and parental consent requirements are generally dangerous and likely unconstitutional.
In response to criticisms, senators updated the bill to remove some of the most flagrantly unconstitutional provisions: it no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media.
Still, it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition. It would still prohibit children under 13 from using any ad-based social media, despite the vast majority of content on social media being lawful speech fully protected by the First Amendment. If enacted, the bill would suffer a similar fate to a California law struck down in 2011 for violating the First Amendment, which was aimed at restricting minors’ access to violent video games.
One silver lining to this fight is that it has activated young people. The threat of KOSA, as well as several similar state-level bills that did pass, has made it clear that young people may be the biggest target for online censorship and surveillance, but they are also a strong weapon against them.
The authors of these bills have good, laudable intentions. But laws that would force platforms to determine the age of their users are privacy-invasive, and laws that restrict speech—even if only for those who can’t prove they are above a certain age—are censorship laws. We expect that KOSA, at least, will return in one form or another. We will be ready when it does.
This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.
Even if only "commercial activities" are in the scope of the CRA, the Free Software community - and, as a consequence, everybody - will lose a lot of small projects. The CRA will force many small enterprises and most probably all self-employed developers out of business because they simply cannot fulfill the requirements it imposes. Debian and other Linux distributions depend on their work. If accepted as it is, the CRA will undermine not only an established community but also a thriving market. The CRA needs an exemption for small businesses and, at the very least, solo entrepreneurs.
Since China State Shipbuilding Corporation (CSSC) unveiled its KUN-24AP containership at the Marintec China Expo in Shanghai in early December of 2023, the internet has been abuzz about it. Not just because it’s the world’s largest container ship at a massive 24,000 TEU, but primarily because of the power source for this behemoth: a molten salt reactor of Chinese design that is said to use a thorium fuel cycle. Not only would this provide the immense amount of electrical power needed to propel the ship, it would also eliminate harmful emissions and allow the ship to travel much faster than other container ships.
Meanwhile the Norwegian classification society, DNV, has already issued an approval-in-principle to CSSC’s Jiangnan Shipbuilding shipyard, a clear sign that we may soon see the first ship of this kind launched. Although the shipping industry is currently struggling with falling demand and a surplus of conventionally powered ships ordered when demand surged in 2020, this kind of new container ship might be just the game changer it needs to meet today’s economic reality.
That said, while a lot about the KUN-24AP is not public information, we can glean some details about the molten salt reactor design that will be used, along with how this fits into the whole picture of nuclear marine propulsion.
The idea of nuclear marine propulsion arose pretty much the moment nuclear reactors were conceived and built. Over the past decades quite a few nuclear-powered vessels have been constructed, with some applications – like commercial shipping and passenger ships – meeting with little success. Meanwhile, nuclear propulsion is effectively the only way that a world power can project military might, as diesel-electric submarines and conventionally powered aircraft carriers lack the range and scale to be of much use.
The primary reason for this is the immense energy density of nuclear fuel, which, depending on the reactor configuration, can allow a vessel to forego refueling for years, decades, or even its entire service life. For US nuclear-powered aircraft carriers, refueling is part of the mid-life (~20 year) shipyard period, during which the entire reactor module is lifted out through a hole cut in the decks and a fresh module is put in. Because of this abundance of power there is never a need to ‘save fuel’, leaving the vessel free to ‘gun it’ in so far as the rest of the ship’s structures can take the strain.
Theoretically the same advantages could be applied to civilian merchant vessels like tankers, cargo and container ships. But today, only the Soviet-era Sevmorput remains in active duty, as part of Rosatom’s Atomflot, which also includes nuclear-powered icebreakers. Launched in 1986, Sevmorput is currently scheduled to be decommissioned in 2024, after a lengthy career perhaps ironically characterized mostly by the fact that very few people today are even aware of its existence, despite its regular trips between various Russian harbors, including those on the Baltic Sea.
The KLT-40 nuclear reactor (135 MWth) in the Sevmorput is very similar to the basic reactor design that powers a US aircraft carrier like the USS Nimitz (two A4W reactors for 550 MWth). Both are pressurized water reactors (PWRs), not unlike the PWRs that make up most of the world’s commercial nuclear power stations, differing mostly in how enriched their uranium fuel is, as this determines the refueling cycle.
Here the KUN-24AP container ship would be a massive departure with its molten salt reactor. Despite this seemingly odd choice, there are a number of reasons for it, including the inherent safety of an MSR, the ability to refuel continuously without shutting down the reactor, and a high burn-up rate, which means very little waste has to be filtered out of the molten salt fuel. The roots of the ship’s reactor would appear to lie in China’s TMSR-LF program, with the TMSR-LF1 reactor having received its operating permit earlier in 2023. This is a breeder design, meaning that it can breed U-233 from thorium (Th-232) via neutron capture, allowing it to run primarily on much cheaper thorium rather than uranium fuel.
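The breeding chain mentioned above is worth spelling out. As a back-of-the-envelope sketch (the half-lives are standard nuclide data, not figures from CSSC), the slow step is Pa-233, which takes weeks to decay into usable U-233:

```python
import math

# Thorium breeding chain, with approximate half-lives:
#   Th-232 + n -> Th-233  (beta-, ~22 min)
#   Th-233    -> Pa-233   (beta-, ~27 days)
#   Pa-233    -> U-233    (the fissile fuel)
PA233_HALF_LIFE_DAYS = 27.0

def pa233_remaining(days):
    """Fraction of a Pa-233 inventory still undecayed after `days`."""
    return math.exp(-math.log(2) * days / PA233_HALF_LIFE_DAYS)

# After one half-life, half the protactinium has become U-233 fuel:
print(round(pa233_remaining(27.0), 2))  # 0.5
```

This weeks-long holdup is one reason continuous online reprocessing, of the kind an MSR permits, is attractive for a thorium fuel cycle.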
Making a very large container ship is not the hard part, as the rapid increase in the number of New Panamax and larger container ships, like the ~24,000 TEU Evergreen A-class, demonstrates. The main problem ultimately becomes propelling it through the water with any kind of speed and control.
Having a direct drive shaft to a propeller requires enough shaft power, which in turn requires a power plant that can provide the necessary torque either directly or via a gearbox. Options include using a big generator with electric propulsion, or boilers and steam turbines. Yet as great as boilers and steam turbines are for versatility and power, they are expensive to run and maintain, which is why the Evergreen G-series container ships use a 75,570 kW combustion engine instead. For comparison, the Kitty Hawk had 210 MW and the Nimitz has 194 MW of installed power, with the latter’s two A4W reactors providing enough steam for 104 MW per pair of propellers, leaving a few hundred MW of electrical power for the ship’s systems.
This amount of power across four propellers allows these aircraft carriers to travel at 32 knots, while container ships typically travel at between 15 and 25 knots, with the increased fuel usage from fast steaming forming a strong incentive to travel at slower speeds of 18-20 knots when deadlines allow. Although fuel usage was also a concern for conventionally powered ships like the Kitty Hawk, the nuclear Nimitz has effectively unlimited fuel for 20-25 years and can thus go anywhere as fast as the rest of the ship and its crew allow.
Today’s shipping industry finds itself, as mentioned earlier, in a bind, even before recent events rendered both the Panama and Suez canals more or less off-limits, forcing cargo ships to fall back on early 19th century shipping routes around Africa and South America. With faster cargo ships traveling at or over 30 knots rather than about 20, the detour around Africa rather than via the Suez Canal could be massively shortened, providing significantly more flexibility. If this also comes with no fuel cost penalty, you suddenly have the attention of every shipping company in the world, and this is where the KUN-24AP’s unveiling makes a lot of sense.
Naturally, there is a lot of concern when it comes to anything involving ‘nuclear power’. Yet many decades of nuclear propulsion have shown the biggest obstacle to be resistance against nuclear marine propulsion itself, with a range of commercial vessels (Mutsu, Otto Hahn, Savannah) finding themselves decommissioned or converted to diesel propulsion not due to accidents, but due to harbors refusing access on grounds of their propulsion, eventually leaving the Sevmorput as the sole survivor of this generation outside of vessels operated by the world’s naval forces. These same naval forces have left a number of sunken nuclear-powered submarines scattered across the ocean floor, incidentally with no ill effects.
Although there are still many details which we don’t know yet about the KUN-24AP and its power plant, the TMSR-LF-derived MSR is likely designed to be highly automated, with the addition of fresh thorium salts and the filtering out of gaseous and solid waste products not requiring human intervention or monitoring. Since the usual staffing of a container ship already features a number of engineering crew members who keep an eye on the combustion engine and other systems, this arrangement is likely to be maintained, with an unknown amount of (re)training required to work with the new propulsion system.
With Samsung Heavy Industries, another shipbuilding giant, having already announced its interest in 2021 in nuclear power plant technology based around a molten salt reactor, the day when container ships quietly glide into harbors around the world with no exhaust gases might come sooner than we think, aided by a lot more acceptance from insurance companies and harbor operators than half a century ago.
(Top image: the proposed KUN-24AP container ship, courtesy of CSSC)
In 2020, the North Carolina Green Party did have a presidential primary, and the ballot listed Howie Hawkins and uncommitted.
The article also says the No Labels Party isn’t using its presidential primary either, but the law does not permit a new party to have a primary, so even if No Labels had wanted a presidential primary, it could not have had one.
Because we collect under non-print legal deposit [the regulation that grants the British Library a copy of every work published in the UK], the idea is we collect everything that is published.
However, there is a fundamental difference between collecting analogue publications such as books, and those that are born digital. Where the former can be appreciated directly, the latter require a platform of some kind. That might be an operating system, a browser plug-in, or specific hardware such as a game console. Rossi warns:
It’s easy to take for granted the technology we have access to right now. Apps and tablets are still very much alive and happening, and people might not realise how fragile some of these formats are, because they are reliant on bespoke software. It’s hard to think far in the future and realise that the kind of technology, and even the way we read, might be very different in a few years’ time.
In practice, that means that as platforms become obsolete and are phased out, they can take with them the digital artefacts that depend on those platforms to be accessed. Technical solutions exist that can help deal with these issues. For example, code stored on old physical formats can be transferred to new ones, and software emulators can help keep digital artefacts alive that might otherwise be impossible to access. That’s the good news.
The bad news is that there’s a big problem in the form of copyright law. Generally speaking, technical solutions can only be applied with the permission of the copyright holders. If the latter can’t be found – hard enough with physical books, often impossible for complex pieces of outdated software – these characteristic digital creations may be doomed to disappear. The severity of punishments for copyright infringement is so disproportionate that researchers and curators are understandably unwilling to risk being taken to court for their preservation work – good intentions are no defense. Moreover, it’s a threat that continues to hang over cultural institutions for decades as a result of copyright’s absurdly long term.
Even when copyright protection on a piece of software or game finally expires after a century or more of being locked away, there is a high probability that they will remain inaccessible. The media on which they are stored may have degraded, or there may be no hardware available on which to run them or to aid programmers in creating emulators.
As Walled Culture the book (free digital versions available) noted with other examples, the problems faced globally by cultural institutions when preserving born-digital cultural artefacts underline the profound mismatch between copyright and the modern world.
Follow me @glynmoody on Mastodon. Originally published to Walled Culture.
The digital library is a staunch supporter of a free and open Internet and began meticulously archiving the web over a quarter century ago.
In addition to archiving the web, IA also operates a library that offers a broad collection of digital media, including books. Staying true to the centuries-old library concept, IA patrons can also borrow books that are scanned and digitized in-house.
The self-scanning service is different from the licensing deals other libraries enter into. Not all publishers are happy with IA’s approach, which triggered a massive legal battle two years ago.
Publishers Hachette, HarperCollins, John Wiley, and Penguin Random House filed a lawsuit, equating IA’s controlled digital lending (CDL) operation to copyright infringement. Earlier this year a New York Federal court concluded that the library is indeed liable for copyright infringement.
The Court’s decision effectively put an end to IA’s self-scanning library, at least for books from the publishers in the suit. However, IA is not letting this go without a fight, and last week the non-profit filed its opening brief at the Second Circuit Court of Appeals, hoping to reverse the judgment.
IA doesn’t stand alone in this legal battle. As the week progressed, several parties submitted amicus curiae briefs to the court supporting IA’s library. This includes the Authors Alliance.
The Authors Alliance represents thousands of members, including two Nobel Laureates, a Poet Laureate of the United States, and three MacArthur Fellows. All benefit from making their work available to a broad public.
If IA’s lending operation is outlawed, the authors fear that their books would become less accessible, allowing the major publishers to increase their power and control.
The Alliance argues that the federal court failed to take the position of authors into account, focusing heavily on the publishers instead. However, the interests of these groups are not always aligned.
“Many authors strongly oppose the actions of the publishers in bringing this suit because they support libraries and their ability to innovate. Authors rely on libraries to reach readers and many are proud to have their works preserved and made available through libraries in service of the public.
“Because these publishers have such concentrated market power […], authors that want to reach wide audiences rarely have the negotiating power to retain sufficient control from publishers to independently authorize public access like that at issue here,” the Alliance adds.
This critique from the authors is not new. Hundreds of writers came out in support of IA’s digital book library at an earlier stage of this lawsuit, urging the publishers to drop their case.
The publishers didn’t listen to these concerns. They believe that IA’s library is disrupting the “ecosystem” and “market equilibrium” of ebook sales. However, the Authors Alliance now counters that the system is already out of whack, as publishers enjoy too much power.
“That ecosystem has long been out of balance, due not to the IA’s activities, but to these publishers’ leveraging of their power to insist on a marketplace in which they exercise almost absolute control over access, preservation, and research,” the Alliance notes.
According to the Authors Alliance, IA’s digital ebook library is a prime example of a service that should be permitted to operate as fair use, as it benefits both writers and readers.
In a separate amicus brief, several prominent legal and copyright scholars, many of whom hold professor titles, raise similar arguments. They believe that IA’s lending system is not that different from the physical libraries that are an integral part of culture.
“Libraries have always been free under copyright law to lend materials they own as they see fit. This is a feature of copyright law, not a bug,” the brief reads.
What is new here is that publishers now assert full control over how their digital books are treated. Instead of allowing libraries to own copies, publishers require them to license the books, which makes it impossible to add them to a permanent archive.
“The major publishers refuse to sell digital books to libraries, forcing them to settle for restrictive licenses of digital content rather than genuine ownership. Moreover, publishers insist they can prevent libraries from scanning their lawfully purchased physical books and lending the resulting digital copies.”
The scholars see IA’s library as fair use and note that the lower court ignored the long history of nonprofit library lending. It placed too much emphasis on the interests of publishers, largely ignoring the public benefits.
Thus far, the Court of Appeals has received four amicus briefs in support of IA’s library. In addition to the two mentioned above, others include a joint submission from the Center for Democracy & Technology, Library Freedom Project and Public Knowledge.
These groups also stress that the court focused too heavily on the publishers’ bottom line, while failing to properly take the rights of consumers into account.
“The district court should have more carefully considered the socially beneficial purposes of library-led CDL, which include protecting patrons’ ability to access digital materials privately, and the harm to copyright’s public benefit of disallowing libraries from using CDL.”
This sentiment is shared in the fourth amicus brief from information scholars and historians Kevin L. Smith and Will Cross, who also argue that publishers have too much power as it is.
The scholars believe that IA’s scan-and-lend library is a prime example of fair use, placing the interests of all stakeholders more closely into balance.
“Here, market failure is evident: one side (the publishers) has such a dominant position that they control all the terms of any sale, without any countervailing forces to balance the market.
“Fair use was designed to address precisely this type of market failure. Thus, CDL should be upheld under fair use. Otherwise, a decision against CDL would harm the public mission of libraries and perpetuate the existing market failure,” they add.
With no shortage of support for the Internet Archive, the stakes of this legal battle are clear. Thus far, the publishers have yet to file their response, but it’s likely that they will also receive support from third parties.
—
The amicus briefs cited in this article are all available below (pdf)
– Authors Alliance
– Copyright scholars
– CDT, Library Freedom Project, and Public Knowledge
– Kevin L. Smith and Will Cross
From: TF, for the latest news on copyright battles, piracy and more.
Humans tend to think that we are the most intelligent life-forms on Earth, and that we’re largely followed by our close relatives such as chimps and gorillas. But there are some areas of cognition in which Homo sapiens and other primates are not unmatched. What other animal’s brain could possibly operate at a human’s level, at least when it comes to one function? Birds—again.
This is far from the first time that bird species such as corvids and parrots have shown that they can think like us in certain ways. Jackdaws are clever corvids that belong to the same family as crows and ravens. After putting a pair of them to the test, an international team of researchers saw that the birds’ working memory operates the same way as that of humans and higher primates. All of these species use what’s termed “attractor dynamics,” where they organize information into specific categories.
Unfortunately for them, that means they also make the same mistakes we do. "Jackdaws (Corvus monedula) have similar behavioral biases as humans; memories are less precise and more biased as memory demands increase,” the researchers said in a study recently published in Communications Biology.
We must reject the idea that businesses deserve the freedom to do facial recognition on video images in and around their premises, or the freedom to make videos and transmit them elsewhere, or to regularly save them for more than a few weeks. This supposed freedom conflicts with the privacy that human beings deserve, and it facilitates the repression that US cities and agencies are already eager to commit. These include Atlanta and Florida.
One inevitable aspect of cities and urban life in general is that they are noisy, with traffic being one of the main sources of noise pollution. Finding a way to attenuate especially the low-frequency noise of road traffic was the subject of [Joe Krcma]’s Master’s thesis, the results of which he presented in a talk at the Portland Maker Meetup Club after graduating from University College London. The chosen solution in his thesis is Helmholtz resonators, a kind of acoustic spring: with a carefully selected opening into the cavity, specific frequencies can be filtered out and extinguished inside it.
As examples of existing uses of Helmholtz resonators in London, he points to the Queen Elizabeth Hall music venue, as well as the newly opened Elizabeth Line and Paddington Station. For indoor applications there are a number of commercial offerings, but could this be applied to outdoor ceramics as well, to turn urban environments into something approaching an oasis of peace and quiet?
For the research, [Joe]’s group developed a number of Helmholtz resonator designs and manufacturing methods, with [Joe] focusing on clay fired versions. For manufacturing, 3D printing of the clay was attempted, which didn’t work out too well. This was followed by slip casting, which allowed for the casting of regular rectangular bricks.
But after issues with making cast hollow bricks work, as well as the cracking of the bricks during firing in the kiln, the work of another student in the group inspired [Joe] to try a different approach. The result was a uniquely shaped ‘brick’ that, when assembled into a wall, forms three Helmholtz resonators: one inside the brick itself, plus two within the spaces formed with neighboring bricks. During trials, the bricks showed sound-deadening performance similar to foam and wood. He also made the shape available on Thingiverse, if you want to try printing or casting it yourself.
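Tuning such a resonator comes down to geometry. As a rough illustration (my sketch using the textbook lumped-element formula, with made-up dimensions rather than [Joe]'s actual brick geometry), the resonant frequency follows from the cavity volume and the neck dimensions:

```python
import math

def helmholtz_frequency(cavity_volume, neck_area, neck_length,
                        speed_of_sound=343.0):
    """Lumped-element estimate of a Helmholtz resonator's frequency (Hz).

    f = (c / 2*pi) * sqrt(A / (V * L_eff)), where L_eff adds an
    end correction (~1.7 * neck radius for a flanged opening).
    All dimensions in SI units (m^3, m^2, m).
    """
    neck_radius = math.sqrt(neck_area / math.pi)
    effective_length = neck_length + 1.7 * neck_radius  # end correction
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * effective_length))

# Example: a 1-liter cavity with a 2 cm long, 2 cm diameter neck lands
# in the low-frequency band that road traffic noise occupies.
f = helmholtz_frequency(cavity_volume=1e-3,           # 1 L in m^3
                        neck_area=math.pi * 0.01**2,  # r = 1 cm
                        neck_length=0.02)
print(round(f), "Hz")  # roughly 160 Hz
```

Note the trade-off this exposes: targeting lower frequencies needs a bigger cavity or a narrower, longer neck, which is why low-frequency traffic noise is the hard case for a brick-sized resonator.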
This story originally published online at NC Newsline.
Five rooms in Poe Hall at NC State University are contaminated with PCBs at levels up to 38 times greater than EPA standards for building materials, according to sampling results obtained by Newsline this week under Public Records law.
The results are here, and Newsline has annotated them to explain what they mean.
University officials last month temporarily closed the seven-story building after initial tests showed the presence of PCBs, a probable carcinogen, in various building materials, including in several air handling units. Poe Hall houses the College of Education and the Department of Psychology.
Raleigh’s WRAL first reported on the building’s closure.
Several people who worked in the building had reported black liquid was dripping onto their desks. An employee filed a complaint with the state Department of Labor in September, WRAL reported, but the university didn’t close the building until November 17.
Here is a list of the affected rooms and the levels of Aroclor 1262, a type of PCB, detected. In two cases, a second type of PCB was detected, Aroclor 1254, which is more toxic. Sampling was conducted in October and November.
An important number is 50 parts per million or greater for insulation and other solid materials: That’s the level at which the materials must be removed, according to the federal Toxic Substances Control Act.
For the swipe samples, the key figure is 10 parts per billion in a 4-inch-by-4-inch square. This threshold applies to non-porous surfaces, like metal tables, in “high occupancy areas,” such as classrooms and offices.
Room 520E, part of a suite of 11 faculty offices
Room 417, instructional computing facility
Room 310-P, faculty office
Fifth-floor women’s bathroom
Room 100
Room 730, faculty office
Room 732C, faculty office
Other air handling units, without designated rooms:
PCBs can be absorbed through the skin, water, air and food, especially fish. The EPA and FDA have set maximum contaminant limits for water and food, while OSHA governs exposure in the workplace, primarily in the air.
The level of risk depends not only on the amount of exposure but the length of time. A person who worked in a contaminated office 40 hours a week for 10 years would be at greater risk than someone who visited that same office once a week for a half hour.
Blood tests can indicate levels of exposure, according to federal health officials, but might not predict health effects.
Long-term effects include cancer of the liver and biliary tract (gallbladder, pancreas, bile ducts). Exposure can also suppress the immune system, according to the National Institutes of Health, as well as cause thyroid and reproductive disorders. Exposed women are at higher risk of giving birth to low-birth-weight infants.
Short-term issues include acne or other skin lesions, high blood pressure and high cholesterol.
In a statement issued in late November, Warwick Arden, executive vice chancellor and provost, and Charles Maimone, executive vice chancellor, wrote that the university is hiring an outside consultant “to conduct more comprehensive environmental testing to help us better understand the environment in the building.
“Until we have this information, we cannot provide definitive guidance about what—if any—remediation or cleaning is needed or whether the findings are cause for concern from a health perspective. Please know that as soon as we have additional context and guidance, we will share it.”
From 1929 to 1977 Monsanto manufactured PCBs for use as coolants and insulating fluids for electrical equipment and machinery, such as transformers, capacitors, even ballasts in fluorescent lights. The EPA banned the manufacture of PCBs in 1979. However, because PCBs don’t break down in the environment, contamination can still be found in buildings that were constructed or renovated between 1950 and 1979. Poe Hall was built in 1971.
Because of their widespread use, PCBs could be present in 60% of building stock, according to Environmental Health and Engineering. This includes schools. The New York Times reported this week that a jury in Washington State determined that Monsanto should pay $857 million to former students and parent volunteers, who said they had been exposed to PCBs at Sky Valley Education Center and became sick.
Monsanto is appealing the verdict and what the company called in a prepared statement, “constitutionally excessive damages.”
PCBs are found at many Superfund sites nationwide, including the former Ward Transformer site near the Raleigh-Durham International Airport. Contamination from that site entered Lake Crabtree, Crabtree Creek and Brier Creek, prompting state officials to issue a fish consumption advisory.
The post New Test Results from NC State Building Show PCB Levels Up to 38 Times Higher Than EPA Standards appeared first on INDY Week.
SAN FRANCISCO—A cartel of major publishing companies must not be allowed to criminalize fair-use library lending, the Internet Archive argued in an appellate brief filed today.
The Internet Archive is a San Francisco-based 501(c)(3) non-profit library which preserves and provides access to cultural artifacts of all kinds in electronic form. The brief filed in the U.S. Court of Appeal for the Second Circuit by the Electronic Frontier Foundation (EFF) and Morrison Foerster on the Archive’s behalf explains that the Archive’s Controlled Digital Lending (CDL) program is a lawful fair use that preserves traditional library lending in the digital world.
“Why should everyone care about this lawsuit? Because it is about preserving the integrity of our published record, where the great books of our past meet the demands of our digital future,” said Brewster Kahle, founder and digital librarian of the Internet Archive. “This is not merely an individual struggle; it is a collective endeavor for society and democracy struggling with our digital transition. We need secure access to the historical record. We need every tool that libraries have given us over the centuries to combat the manipulation and misinformation that has now become even easier.”
“This appeal underscores the role of libraries in supporting universal access to information—a right that transcends geographic location, socioeconomic status, disability, or any other barriers,” Kahle added. “Our digital lending program is not just about lending responsibly; it’s about strengthening democracy by creating informed global citizens.”
Through CDL, the Internet Archive and other libraries make and lend out digital scans of print books in their collections, subject to strict technical controls. Each book loaned via CDL has already been bought and paid for, so authors and publishers have already been fully compensated for those books; in fact, concrete evidence shows that the Archive’s digital lending—which is limited to the Archive’s members—does not and will not harm the market for books.
Nonetheless, publishers Hachette, HarperCollins, Wiley, and Penguin Random House sued the Archive in 2020, claiming incorrectly that CDL violates their copyrights. A judge of the U.S. District Court for the Southern District of New York in March granted the plaintiffs’ motion for summary judgment, leading to this appeal.
The district court’s “rejection of IA’s fair use defense was wrongly premised on the supposition that controlled digital lending is equivalent to indiscriminately posting scanned books online,” the brief argues. “That error caused it to misapply each of the fair use factors, give improper weight to speculative claims of harm, and discount the tremendous public benefits controlled digital lending offers. Given those benefits and the lack of harm to rightsholders, allowing IA’s use would promote the creation and sharing of knowledge—core copyright purposes—far better than forbidding it.”
The brief explains how the Archive’s digital library has facilitated education, research, and scholarship in numerous ways. In 2019, for example, the Archive received federal funding to digitize and lend books about internment of Japanese Americans during World War II. In 2022, volunteer librarians curated a collection of books that have been banned by many school districts but are available through the Archive’s library. Teachers have used the Archive to provide students access to books for research that were not available locally. And the Archive’s digital library has made online resources like Wikipedia more reliable by allowing articles to link directly to the particular page in a book that supports an asserted fact and by allowing readers to immediately borrow the book to verify it.
For the brief: https://www.eff.org/document/internet-archive-opening-brief-us-court-appeals-second-circuit
For more on the case: https://www.eff.org/cases/hachette-v-internet-archive
For the Internet Archive's blog post: https://blog.archive.org/2023/12/15/internet-archive-defends-digital-rights-for-libraries/
Now, if you've heard anything about this, you've probably been told that Mickey isn't really entering the public domain. Between trademark claims and later copyrightable elements of Mickey's design, Mickey's status will be too complex to understand. That's totally wrong. [...]
The copyrightable status of a character used to be vague and complex, but several high-profile cases have brought clarity to the question. The big one is Les Klinger's case against the Arthur Conan Doyle estate over Sherlock Holmes. That case established that when a character appears in both public domain and copyrighted works, the character is in the public domain, and you are "free to copy story elements from the public domain works": [...]
Despite what you might have heard, there is no ambiguity here. Copyrights can't be extended through trademark. Period. Unanimous Supreme Court Decision. Boom. End of story. Done.
Over the past few months we've migrated all of the vger.kernel.org mailing lists, with the exception of the Big One (linux-kernel, aka LKML). This list alone is responsible for about 80% of all vger mailing list traffic, so we left it for last.

This Thursday, December 14, at 11AM Pacific (19:00 UTC), we will switch the MX record for vger to point to the new location (subspace.kernel.org), which will complete the mailing list migration from the legacy vger server to the new infrastructure.
[Raymond Chen] wondered why the x86 ENTER instruction had a strange second parameter that seems to always be set to zero. If you’ve ever wondered, [Raymond] explains what he learned in a recent blog post.
If you’ve ever taken apart the output of a C compiler or written assembly programs, you probably know that ENTER is supposed to set up a new stack frame. Presumably, you are in a subroutine, and some arguments were pushed on the stack for you. The instruction saves the caller's EBP, points EBP at the new frame (the arguments sit just above it), and then adjusts the stack pointer to make room for your local variables. That local variable size is the first operand to ENTER.
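For the common case, ENTER with a zero second operand is just shorthand for the familiar frame-setup sequence. A sketch in 32-bit x86 syntax (the 16-byte locals size is an arbitrary example):

```asm
; ENTER 16, 0 is roughly equivalent to:
push ebp          ; save the caller's frame pointer
mov  ebp, esp     ; establish the new frame
sub  esp, 16      ; reserve 16 bytes for locals (ENTER's first operand)
```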
The reason you rarely see the second operand set to a non-zero value is that it exists for languages that are not as common these days: it specifies the function's lexical nesting level. In a simple way of thinking, C functions live at global scope. Sure, there are namespaces and methods for classes and instances. But you don’t normally have a C compiler that allows a function to define another function, right?
It turns out gcc does support nested functions as an extension (g++ does not). However, looking at the generated code shows that gcc doesn’t use ENTER to implement them, though it could. The idea is that a nested function can “see” any local variables that belong to the enclosing function. For example, this works if you allow gcc to use its extensions:
#include <stdio.h>

void test()
{
    int a = 10;
    /* nested function -- a gcc extension */
    void testloop(int n)
    {
        int x = a;          /* reads the enclosing function's local */
        while (n--)
            printf("%d\n", x);
    }
    testloop(3);
    printf("Again\n");
    testloop(2);
    printf("and now\n");
    a = 33;                 /* the change is visible to the nested function */
    testloop(5);
}

int main(void)
{
    test();
    return 0;
}
You can see that the testloop function has access to its argument, a local variable, and also a local variable that belongs to the test function. We aren’t saying this is a good idea, but it is possible, and it is common in certain other languages like Pascal, for example.
In some cases, this situation is handled by providing a linked list of stack frames (a chain of static links). However, the Intel designers decided to do it differently. When you provide a non-zero second operand to ENTER, it copies an array of the enclosing frames' pointers, known as a display, onto the stack ahead of your local variables. This makes access to outer locals potentially more efficient as the code executes, but it exacts a penalty on every call to a nested function.
As [Raymond] points out, though, it may be that no one uses this feature. Certainly, gcc doesn’t. If you want to make sure, try these commands with the above program in nest.c to check out 32-bit x86:
gcc -m32 -g -o nest nest.c
gcc -m32 -S nest.c
# now look at nest.s and/or disassemble nest using gdb
Of course, if you write your own assembly, you could use the feature as you see fit. The x86 has some crazy instructions. If you’ve ever wondered if you should learn assembly language, our commenters would like a word with you.
The world's economic output would be substantially higher (5%?) if our industry had settled on almost anything other than SQL for relational databases.
The post also disparages object-relational mappers and suggests Datalog as an alternative query language.