Published April 30, 2026 by Alex
I'm going to cover a few topics today: the PlayStation 30-day DRM check, the shutdown of Tripod and Angelfire, DeepSeek V4, and the Lisuan LX 7G100 GPU announcement. The video version of this article will be available on my YouTube channel, and sources will be listed throughout the article. With that out of the way, let's get into it.
This is from Tom's Hardware,
"Sony rolls out 30-day online DRM check-in for PlayStation digital games. Players could temporarily lose access if they don't keep their consoles online. A few days ago, reports of a new DRM policy surrounding PS4 and PS5 consoles began popping up. Many users are seeing a new 30-day online check-in requirement for some games. In the info page of an affected game, you'd see a new validity period and a "remaining time" deadline. At first, this seemed like a software bug, but now PlayStation Support has confirmed its authenticity to multiple users. PlayStation owners are furious about the change. From what we've seen, this DRM is intended for digital game copies. It works by instating a mandatory online check-in where you have to connect to the internet within a rolling 30-day window or risk losing access to the game. Afterward, you can still restore access, but you'll need an internet connection to renew the game's license first. So far, it seems like only games installed after the recent March firmware update are affected." And "Setting the PS4 and PS5 as a primary console does not alleviate the policy, and any game you download from now on will feature this new requirement, effectively eliminating the concept of offline play for even singleplayer titles."
Source: Hassam Nasir, "Sony rolls out 30-day online DRM check-in for PlayStation digital games — players could temporarily lose access if they don't keep their consoles online" Tom's Hardware, April 28.
This is just another in a long line of stories like this; I remember the one about Amazon removing movies that people “bought” on Prime Video. Of course, I put “bought” in quotes because when you buy something from a platform like this with DRM, you really haven't bought anything. You're being granted a license to access the content from the service, and that license can be revoked at any time, or the terms can be altered at any time without warning. Stories like this one are why I'm a big proponent of physical media. You can still get the PS5 with an optical drive, the same is true of the PS4, and you can get physical copies of the games. But in a lot of cases now, there are games that are digital only, and on PC, there are games that are only on Steam. You have GOG and other ways to get games without DRM, but it's not perfect. The unfortunate thing is that the selling point for Steam was that it's easier than piracy. It's more convenient than piracy. That's largely true, and because it's more convenient, most people just use it, and very few care about the presence of DRM. The same is true of the other marketplaces: Epic, Uplay, and the rest. The good thing is that people don't seem happy about what Sony is doing here, and there's some backlash, so hopefully Sony reverses it, but honestly I don't expect that to happen. From what I have seen, this type of mandatory check-in has already been the case on Xbox. I haven't used a console in a while, but hopefully this kind of thing makes people start to think about DRM and consider owning physical copies of their games when possible, because there are plenty of good reasons to. If you don't want to play a game anymore, or you've finished it, you can always resell it; you can't do that with a digital copy. A physical copy can't be removed, and nobody can go back in and mess with it. The same is true of shows and movies.
There have been some instances of streaming services removing certain episodes, or altering them. I think it was Seinfeld where they altered the aspect ratio to make it widescreen, just cutting off the top and bottom, and some episodes of The Office that are now considered too offensive were removed from whatever streaming service it's on. Really, DRM is not something people should be tolerating, in my opinion. I try to avoid it as much as possible, and hopefully this results in more people becoming aware of it and trying to fight back against it.
Alright, next story. Two web hosting services, Angelfire and Tripod, both owned by Lycos, appear to be disappearing from the internet.
This is from the ArchiveTeam Wiki,
On March 6th of this year, the following announcement appeared on the homepage of Lycos, implying that the services would be shut down by April 5th: “To our users of Angelfire and Tripod, we apologize for this service interruption. Unfortunately we will be shutting down in the next thirty days, please move your hosting to another host as soon as possible." While the above message was removed from Lycos' homepage shortly after, it could still be found in the page's source code. On April 3rd of this year, the announcement reappeared, and the user-generated websites of Tripod remained accessible for the following weeks, unlike those of Angelfire, which gave 403 or 502 errors; the latter was also possible for Tripod, but did not generally happen. On April 24th, seven weeks after the original announcement, Tripod ceased to be accessible, and by April 27th, another message appeared on the Lycos homepage: "Lycos services are experiencing a temporary outage. To our loyal users of Lycos Mail, Tripod and Angelfire, we apologize and appreciate your patience while we work to restore service as soon as possible."
Source: "Tripod" ArchiveTeam Wiki.
If you're not too familiar with these two services: from what I understand, they had both been around for thirty years, they were much bigger in the early days of the internet, and, similar to GeoCities, they hosted a lot of personal sites and small blogs from back then. And though these services removed “inactive” accounts after some period of time, hosting services from this long ago serve as a sort of time capsule from the early days of the internet. Though it's a little before my time, it's really unfortunate to see stuff like this go away, because, as we know, GeoCities only had one out of eight terabytes archived, and MySpace lost all content uploaded before 2016 during a failed server migration in 2019, which was over two hundred terabytes of data. It looks like in this case, ArchiveTeam is working to recover some of the sites that were hosted on Tripod and Angelfire, which is good to see, but it's always sad when stuff like this happens; it's just another reminder that the days of the old internet are largely gone. We have Neocities and Wiby, but it's not really the same, is it? At least that's what I've heard; again, I wasn't really there. But as someone interested in technology, I've got to have an appreciation for stuff like this regardless.
Next up, for some better news, DeepSeek V4 is available in preview. This is from The Register,
“DeepSeek's new models are so efficient they'll run on a toaster, by which we mean Huawei's NPUs. Chinese AI darling DeepSeek is back with a new open weights large language model that promises performance to rival the best proprietary American LLMs. Perhaps more importantly, it claims to dramatically reduce inference costs and it extends support for Huawei's Ascend family of AI accelerators. Unveiled on Friday, DeepSeek V4 is available for download on popular model repos like Hugging Face, the company's API, and web service in two new flavors. The first is a smaller 284 billion parameter Flash mixture-of-experts (MoE) model with 13 billion active parameters, while the larger of the two is a 1.6 trillion parameter model, 49 billion of which are in use at any given moment. V4-Pro was trained on 33 trillion tokens and, if DeepSeek is to be believed, beats out every open weight LLM while rivaling the West's best proprietary models across its benchmark suite. Of course, these claims should be taken with a grain of salt. While DeepSeek has had a strong track record with its V3 and R1 family of models that made the Chinese dev a household name, just because it performs well in canned benchmarks doesn't mean it'll hold up in real world applications. We would expect DeepSeek V4-Pro to be much better than the company's prior efforts. The new model is nearly a trillion parameters larger and uses more active parameters during inference. But as was the case with DeepSeek V3, which showed that large frontier models could be trained using less compute than previously thought, benchmarks don't tell the full story. Under the hood, DeepSeek V4 introduces several novel architectural changes that, according to developers, should make the model much less expensive to serve.”
Source: Tobias Mann, "DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs" The Register, April 24.
There’s a whole explanation of the specific optimizations that were made that I’m not gonna get into here, but what I will say is that when DeepSeek R1 came out, it was a big shock to the AI space, and it tanked the stock market. The particularly interesting thing about DeepSeek is that its models are open weights; they just put them out for download on Hugging Face. There were some allegations, particularly from OpenAI, that it had basically extracted results from Western AI models to train itself, and I will say that I personally have used DeepSeek, and in the past, just asking it what it is, it would sometimes say that it was made by OpenAI, and more recently it would say that it's some sort of Claude model. But maybe that's just hallucination? I don't work there, so I can't be sure; I'm not making any claims either way. But regardless of the effects on Western AI companies, for consumers this is likely to be a good thing. For one, according to this article here,
“The company is unsurprisingly offering API access to the smaller model at a reduced rate of $0.14 per million input tokens (uncached) and $0.28 per million output tokens. The larger Pro model is much more expensive at $1.74 per million input tokens and $3.48 per million output tokens, but that's still a fraction of what Western AI vendors are charging for access to their top models. For reference, OpenAI charges $5 per million input tokens and $30 per million output tokens for GPT-5.5.”
Source: Tobias Mann, "DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs" The Register, April 24.
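To put those quoted rates in concrete terms, here's a quick back-of-the-envelope comparison in Python. The per-million-token prices are the ones quoted above; the example workload (10M input tokens, 2M output tokens) is just an illustration I made up, not anything from the article.

```python
# Per-million-token API prices (USD) as quoted in The Register article.
# (input_rate, output_rate) pairs, keyed by model name.
PRICES = {
    "DeepSeek V4 Flash": (0.14, 0.28),
    "DeepSeek V4 Pro": (1.74, 3.48),
    "GPT-5.5": (5.00, 30.00),
}

def api_cost(model, input_tokens, output_tokens):
    """Total cost in USD for a given token usage at the quoted rates."""
    input_rate, output_rate = PRICES[model]
    return (input_tokens / 1e6) * input_rate + (output_tokens / 1e6) * output_rate

# Hypothetical workload: 10M input tokens, 2M output tokens.
for model in PRICES:
    print(f"{model}: ${api_cost(model, 10e6, 2e6):.2f}")
```

At those rates, the same hypothetical workload comes out to roughly $1.96 on Flash, $24.36 on Pro, and $110 on GPT-5.5, so the "fraction of what Western AI vendors are charging" claim checks out arithmetically.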
Maybe the other companies will lower their prices to compete; we’ll have to see. In addition, when you look at Google TurboQuant from last year and now this, if this is a step toward meaningfully reducing inference costs across the board, it could let these AI models perform the same tasks with fewer hardware resources, which could maybe lead to reduced prices for GPUs and memory. We’re already starting to see a little bit of that, from what I’ve heard, with prices going down just a bit, and only time will tell whether this and its knock-on effects put a dent in the inflated prices that are hurting consumers trying to get their hands on memory and decent GPUs. This could also lower the barrier to entry for local LLM use cases, which is particularly interesting to me. DeepSeek has released distilled Qwen-based models in the past, which came in smaller sizes, and that could continue to be a good option for lots of people. We’re still waiting for the bubble to pop.
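On the local-LLM point, you can get a rough sense of what a model this size would take to run from simple arithmetic on weight storage. This is a hedged sketch: it only counts the weights themselves (ignoring KV cache, activations, and runtime overhead), and it assumes the 284-billion-parameter Flash model from the article at a few common quantization levels.

```python
def weights_gb(params_billion, bits_per_weight):
    """Rough weight-storage estimate in GB (decimal): params × bits / 8.
    Ignores KV cache, activations, and any runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# DeepSeek V4 Flash: 284B total parameters, at common quantization levels.
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weights_gb(284, bits):.0f} GB of weights")
```

Even at 4-bit quantization that's on the order of 142 GB just for the weights, so "local" here still means serious hardware; the MoE design (13B active parameters) helps compute cost per token, but the full weights still have to live somewhere.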
Last up, this is from VideoCardz.com,
“Lisuan LX 7G100 gaming GPU to launch in June with support for 100+ games. Lisuan Technology reiterates its plans to launch its LX 7G100 consumer GPU during China’s 618 shopping festival. The company claims the card has already been adapted for more than 100 games. The report lists support for mainstream AAA games and popular online titles. Demoed or mentioned titles include Black Myth: Wukong, Elden Ring, Cyberpunk 2077, Naraka: Bladepoint and Red Dead Redemption 2.” “Lisuan CEO Xuan Yifang also gave a more cautious view of the card’s current status. He said the first-generation GPU is ahead of GeForce RTX 4060 in OpenCL workloads, but still needs work in real game frame rates and software compatibility.”
Source: WhyCry, "Lisuan LX 7G100 gaming GPU to launch in June with support for 100+ games" VideoCardz.com, April 24.
If you don’t know, Lisuan is a Chinese company founded by former employees of S3 Graphics, and this isn’t the first time it has been in the news. Because of the ongoing AI arms race, the Chinese government is putting significant funding into developing AI accelerator GPUs, like the Huawei Ascend series that was used during the training of DeepSeek V4, both for obvious national security reasons and because Western products like Nvidia's Blackwell GPUs are subject to export controls, making them difficult for Chinese companies to get. But this funding seems to have gone beyond AI; it now appears to be producing potential new offerings for gaming GPUs as well. From what it says here, this card appears to be around the performance of an RTX 4060, which is not top tier, with 12GB of GDDR6 memory, which is pretty good, and 225 watts of board power through a single 8-pin connector. There has been steady progress in this direction from a few different companies, Moore Threads being another one, and it would be nice to see new options in the entry-level gaming GPU segment at more competitive prices, provided they're actually allowed to be sold here in the US and aren't subject to significant tariffs or anything like that. The only other thing I'd be concerned about is long-term support for products like this: stability, and actually working with a large number of tasks and games, because that was a big issue with Intel's Arc when it launched initially, though it got better over time. Of course, for AI workloads, Nvidia's CUDA is still the dominant platform, and every other player in the space has faced a lot of difficulty with adoption for that reason. Maybe multiple companies could work together to develop a rival eventually, but that's well off in the future, so we're just going to have to wait and see with this one as well.
Made with love by Alex from Tech Temper
Have questions? Comments? Suggestions for my website? Contact me at gdp770@proton.me