Apr 11 2025

Bury Broadband and Electricity

We may have a unique opportunity to make an infrastructure investment that can demonstrably save money over the long term – by burying power and broadband lines. This is always an option, of course, but since we are in the early phases of rolling out fiber optic service, and also trying to improve our grid infrastructure with reconductoring, now may be the perfect time to also upgrade our infrastructure by burying much of these lines.

This has long been a frustration of mine. I remember over 40 years ago seeing new housing developments (my father was in construction) with all the power lines buried. I hadn’t realized what a terrible eye sore all those telephone poles and wires were until they were gone. It was beautiful. I was lead to believe this was the new trend, especially for residential areas. I looked forward to a day without the ubiquitous telephone poles, much like the transition to cable eliminated the awful TV antennae on top of every home. But that day never came. Areas with buried lines remained, it seems, a privilege of upscale neighborhoods. I get further annoyed every time there is a power outage in my area because of a downed line.

The reason, ultimately, had to be cost. Sure, there are lots of variables that determine that cost, but at the end of the day developers, towns, utility companies were taking the cheaper option. But what price do we place on the aesthetics of the places we live, and the inconvenience of regular power outages? I also hate the fact that the utility companies have to come around every year or so and carve ugly paths through large beautiful trees.

Continue Reading »

Comments: 0

Apr 08 2025

De-extincting the Dire Wolf

This really is just a coincidence – I posted yesterday about using AI and modern genetic engineering technology, with one application being the de-extinction of species. I had not seen the news from yesterday about a company that just announced it has cloned three dire wolves from ancient DNA. This is all over the news, so here is a quick recap before we discuss the implications.

The company, Colossal Biosciences, has long announced its plans to de-extinct the woolly mammoth. This was the company that recently announced it had made a woolly mouse by inserting a gene for wooliness from recovered woolly mammoth DNA. This was a proof-of-concept demonstration. But now they say they have also been working on the dire wolf, a species of wolf closely related to the modern gray wolf that went extinct 13,000 years ago. We mostly know about them from skeletons found in the Le Brea tar pits (some of which are on display at my local Peabody Museum). Dire wolves are about 20% bigger than gray wolves, have thicker lighter coats, and are more muscular. They are the bad-ass ice-age version of wolves that coexisted with saber-toothed tigers and woolly mammoths.

The company was able to recover DNA from 13,000 year old tooth and a 72,000 year old skull. With that DNA they engineered wolf DNA at 20 sites over 14 genes, then used that DNA to fertilize an egg which they gestated in a dog. They actually did this twice, the first time creating two males, Romulus and Remus (now six months old), and the second time making one female, Kaleesi (now three months old). The wolves are kept in a reserve. The company says they have no current plan to breed them, but do plan to make more in order to create a full pack to study pack behavior.

Continue Reading »

Comments: 0

Apr 07 2025

Will AI Bring Us Jurassic Park

I think it’s increasingly difficult to argue that the recent boom in artificial intelligence (AI) is mostly hype. There is a lot of hype, but don’t let that distract you from the real progress. The best indication of this is applications in scientific research, because the outcomes are measurable and objective. AI applications are particularly adept at finding patterns in vast sets of data, finding patterns in hours that might have required months of traditional research. We recently discussed on the SGU using AI to sequence proteins, which is the direction that researchers are going in. Compared to the traditional method using AI analysis is faster and better at identifying novel proteins (not already in the database).

One SGU listener asked an interesting question after our discussion of AI and protein sequencing that I wanted to explore – can we apply the same approach to DNA and can this result in reverse-engineering the genetic sequence from the desired traits? AI is already transforming genetic research. AI apps allow for faster, cheaper, and more accurate DNA sequencing, while also allowing for the identification of gene variants that correlate with a disease or a trait. Genetics is in the sweet spot for these AI applications – using large databases to find meaningful patterns. How far will this tech go, and how quickly.

We have already sequenced the DNA of over 3,000 species. This number is increasing quickly, accelerated by AI sequencing techniques. We also have a lot of data about gene sequences and the resulting proteins, non-coding regulatory DNA, gene variants and disease states, and developmental biology. If we trained an AI on all this data, could it then make predictions about the effects of novel gene variants? Could it also go from a desired morphological trait back to the genetic sequence that would produce that trait? Again, this sounds like the perfect application for AI.

Continue Reading »

Comments: 0

Apr 04 2025

Is Planned Obsolescence Real

Yes – it is well-documented that in many industries the design of products incorporates a plan for when the product will need to be replaced. A blatant example was in 1924 when an international meeting of lightbulb manufacturers decided to limit the lifespan of lightbulbs to 1,000 hours, so that consumers would have to constantly replace them. This artificial limitation did not end until CFLs and then LED lightbulbs largely replaced incandescent bulbs.

But – it’s more complicated than you might think (it always is). Planned obsolescence is not always about gimping products so they break faster. It often is – products are made so they are difficult to repair or upgrade and arbitrary fashions change specifically to create demand for new versions. But often there is a rational decision to limit product quality. Some products, like kids’ clothes, have a short use timeline, so consumers prefer cheap to durable. There is also a very good (for the consumer) example of true obsolescence – sometimes the technology simply advances, offering better products. Durability is not the only nor the primary attribute determining the quality of a product, and it makes no sense to build in expensive durability for a product that consumers will want to replace. So there is a complex dynamic among various product features, with durability being only one feature.

We can also ask the question, for any product or class of products, is durability actually decreasing over time? Consumers are now on the alert for planned obsolescence, and this may produce the confirmation bias of seeing it everywhere, even when it’s not true. A recent study looking at big-ticket appliances shows how complex this question can be. This is a Norwegian study looking at the lifespan of large appliances over decades, starting in the 1950s.

First, they found that for most large appliances, there was no decrease in lifespan over this time period. So the phenomenon simply did not exist for the items that homeowning consumers care the most about, their expensive appliances. There were two exceptions, however – ovens and washing machines. Each has its own explanations.

Continue Reading »

Comments: 0

Apr 03 2025

The Transition to Agriculture

It is generally accepted that the transition from hunter-gatherer communities to agriculture was the single most important event in human history, ultimately giving rise to all of civilization. The transition started to take place around 12,000 years ago in the Middle East, China, and Mesoamerica, leading to the domestication of plants and animals, a stable food supply, permanent settlements, and the ability to support people not engaged full time in food production. But why, exactly, did this transition occur when and where it did?

Existing theories focus on external factors. The changing climate lead to fertile areas of land with lots of rainfall, at the same time food sources for hunting and gathering were scarce. This occurred at the end of the last glacial period. This climate also favored the thriving of cereals, providing lots of raw material for domestication. There was therefore the opportunity and the drive to find another reliable food source. There also, however, needs to be the means. Humanity at that time had the requisite technology to begin farming, and agricultural technology advanced steadily.

A new study looks at another aspect of the rise of agriculture, demographic interactions. How were these new agricultural communities interacting with hunter-gather communities, and with each other? The study is mainly about developing and testing an inferential model to look at these questions. Here is a quick summary from the paper:

“We illustrate the opportunities offered by this approach by investigating three archaeological case studies on the diffusion of farming, shedding light on the role played by population growth rates, cultural assimilation, and competition in shaping the demographic trajectories during the transition to agriculture.”

In part the transition to agriculture occurred through increased population growth of agricultural communities, and cultural assimilation of hunter-gatherer groups who were competing for the same physical space. Mostly they were validating the model by looking at test cases to see if the model matched empirical data, which apparently it does.

Continue Reading »

Comments: 0

Mar 31 2025

The Politicians We Deserve

This is an interesting concept, with an interesting history, and I have heard it quoted many times recently – “we get the politicians (or government) we deserve.” It is often invoked to imply that voters are responsible for the malfeasance or general failings of their elected officials. First let’s explore if this is true or not, and then what we can do to get better representatives.

The quote itself originated with Joseph de Maistre who said, “Every nation gets the government it deserves.” (Toute nation a le gouvernement qu’elle mérite.) Maistre was a counter-revolutionary. He believed in divine monarchy as the best way to instill order, and felt that philosophy, reason, and the enlightenment were counterproductive. Not a great source, in my opinion. But apparently Thomas Jefferson also made a similar statement, “The government you elect is the government you deserve.”

Pithy phrases may capture some essential truth, but reality is often more complicated. I think the sentiment is partly true, but also can be misused. What is true is that in a democracy each citizen has a civic responsibility to cast informed votes. No one is responsible for our vote other than ourselves, and if we vote for bad people (however you wish to define that) then we have some level of responsibility for having bad government. In the US we still have fair elections. The evidence pretty overwhelmingly shows that there is no significant voter fraud or systematic fraud stealing elections.

This does not mean, however, that there aren’t systemic effects that influence voter behavior or limit our representation. This is a huge topic, but just to list a few examples – gerrymandering is a way for political parties to choose their voters, rather than voters choosing their representatives, the electoral college means that for president some votes have more power than others, and primary elections tend to produce more radical options. Further, the power of voters depends on getting accurate information, which means that mass media has a lot of power. Lying and distorting information deprives voters of their ability to use their vote to get what they want and hold government accountable.

Continue Reading »

Comments: 0

Mar 28 2025

H&M Will Use Digital Twins

The fashion retailer, H&M, has announced that they will start using AI generated digital twins of models in some of their advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds.

Regarding the H&M announcement specifically, they said they will use digital twins of models that have already modeled for them, and only with their explicit permission, while the models retain full ownership of their image and brand. They will also be compensated for their use. On social media platforms the use of AI-generated imagery will carry a watermark (often required) indicating that the images are AI-generated.

It seems clear that H&M is dipping their toe into this pool, doing everything they can to address any possible criticism. They will get explicit permission, compensate models, and watermark their ads. But of course, this has not shielded them from criticism. According to the BBC:

American influencer Morgan Riddle called H&M’s move “shameful” in a post on her Instagram stories.

“RIP to all the other jobs on shoot sets that this will take away,” she posted.

This is an interesting topic for discussion, so here’s my two-cents. I am generally not compelled by arguments about losing existing jobs. I know this can come off as callous, as it’s not my job on the line, but there is a bigger issue here. Technological advancement generally leads to “creative destruction” in the marketplace. Obsolete jobs are lost, and new jobs are created. We should not hold back progress in order to preserve obsolete jobs.

Continue Reading »

Comments: 0

Mar 27 2025

The 80-20 Rule

From the Topic Suggestions (Lal Mclennan):

What is the 80/20 theory portrayed in Netflix’s Adolescence?

The 80/20 rule was first posed as a Pareto principle that suggests that approximately 80 per cent of outcomes stem from just 20 per cent of causes. This concept takes its name from Vilfredo Pareto, an Italian economist who noted in 1906 that a mere 20 per cent of Italy’s population owned 80 per cent of the land.
Despite its noble roots, the theory has since been misappropriated by incels.
In these toxic communities, they posit that 80 per cent of women are attracted to only the top 20 per cent of men. https://www.mamamia.com.au/adolescence-netflix-what-is-80-20-theory/

As I like to say, “It’s more of a guideline than a rule.” Actually, I wouldn’t even say that. I think this is just another example of humans imposing simplistic patterns of complex reality. Once you create such a “rule” you can see it in many places, but that is just confirmation bias. I have encountered many similar “rules” (more in the context of a rule of thumb). For example, in medicine we have the “rule of thirds”. Whenever asked a question with three plausible outcomes, a reasonable guess is that each occurs a third of the time. The disease is controlled without medicine one third of the time, with medicine one third, and not controlled one third, etc. No one thinks there is any reality to this – it’s just a trick for guessing when you don’t know the answer. It is, however, often close to the truth, so it’s a good strategy. This is partly because we tend to round off specific numbers to simple fractions, so anything close to 33% can be mentally rounded to roughly a third. This is more akin to a mentalist’s trick than a rule of the universe.

Continue Reading »

Comments: 0

Mar 24 2025

How To Keep AIs From Lying

We had a fascinating discussion on this week’s SGU that I wanted to bring here – the subject of artificial intelligence programs (AI), specifically large language models (LLMs), lying. The starting point for the discussion was this study, which looked at punishing LLMs as a method of inhibiting their lying. What fascinated me the most is the potential analogy to neuroscience – are these LLMs behaving like people?

LLMs use neural networks (specifically a transformer model) which mimic to some extent the logic of information processing used in mammalian brains. The important bit is that they can be trained, with the network adjusting to the training data in order to achieve some preset goal. LLMs are generally trained on massive sets of data (such as the internet), and are quite good at mimicking human language, and even works of art, sound, and video. But anyone with any experience using this latest crop of AI has experienced AI “hallucinations”. In short – LLMs can make stuff up. This is a significant problem and limits their reliability.

There is also a related problem. Hallucinations result from the LLM finding patterns, and some patterns are illusory. The LLM essentially makes the incorrect inference from limited data. This is the AI version of an optical illusion. They had a reason in the training data for thinking their false claim was true, but it isn’t. (I am using terms like “thinking” here metaphorically, so don’t take it too literally. These LLMs are not sentient.) But sometimes LLMs don’t inadvertently hallucinate, they deliberately lie. It’s hard not to keep using these metaphors, but what I mean is that the LLM was not fooled by inferential information, it created a false claim as a way to achieve its goal. Why would it do this?

Well, one method of training is to reward the LLM when it gets the right answer. This reward can be provided by a human – checking a box when the LLM gives a correct answer. But this can be time consuming, so they have build self-rewarding language models. Essentially you have a separate algorithm which assessed the output and reward the desired outcome. So, in essence, the goal of the LLM is not to produce the correct answer, but to get the reward. So if you tell the LLM to solve a particular problem, it may find (by exploring the potential solution space) that the most efficient way to obtain the reward is to lie – to say it has solved the problem when it has not. How do we keep it from doing this.

Continue Reading »

Comments: 0

Mar 21 2025

The Neuroscience of Constructed Languages

Language is an interesting neurological function to study. No animal other than humans has such a highly developed dedicated language processing area, or languages as complex and nuanced as humans. Although, whale language is more complex than we previously thought, but still not (we don’t think) at human level. To better understand how human language works, researchers want to understand what types of communication the brain processes like language. What this means operationally, is that the processing happens in the language centers of the brain – the dominant (mostly left) lateral cortex comprising parts of the frontal, parietal, and temporal lobes. We have lots of fancy tools, like functional MRI scanning (fMRI) to see which parts of the brain are active during specific tasks, so researchers are able to answer this question.

For example, math and computer languages are similar to languages (we even call them languages), but prior research has shown that when coders are working in a computer language with which they are well versed, their language centers do not light up. Rather, the parts of the brain involved in complex cognitive tasks is involved. The brain does not treat a computer language like a language. But what are the critical components of this difference? Also, the brain does not treat non-verbal gestures as language, nor singing as language.

A recent study tries to address that question, looking at constructed languages (conlangs). These include a number of languages that were completely constructed by a single person fairly recently. The oldest of the languages they tested was Esperanto, created by L. L. Zamenhof in 1887 to be an international language. Today there are about 60,000 Esperanto speakers. Esperanto is actually a hybrid conlang, meaning that it is partly derived from existing languages. Most of its syntax and structure is taken from Indo-European languages, and 80% of its vocabulary is taken from Romance languages. But is also has some fabricated aspects, mostly to simplify the grammar.

Continue Reading »

Comments: 0

« Prev - Next »