Fedora People

Fedora Ops Architect Weekly

Posted by Fedora Community Blog on April 05, 2024 10:16 PM

Hello folks! I’m coming to you live from a very wet and windy Ireland. April showers are certainly a thing, but this kind of rain is even giving the typical Irish weather a run for its money! 😀 I hope you have all had a good (and drier) week so far and enjoy your weekend. If you would like some weekend reading, weekend bug fixing, or even a weekend proposal-writing session, read on for the links and information you need to do just that 🙂

Flock to Fedora

The call for papers for our annual contributor conference, Flock to Fedora, is now open until April 21st. Check out the CFP page for details on the tracks and themes of this year’s conference, plus information on travel subsidies and how to contact event staff if you need help.

Fedora Linux 40

Important Dates

  • Currently in Final Freeze
  • Thursday, April 11th @ 1700 UTC – Fedora Linux 40 Go/No-Go Meeting
  • Tuesday, April 16th – Current Release Target Date (this will depend on the outcome of the Go/No-Go meeting)

Help Wanted

There are a number of blocker bugs open against F40 at the moment, both proposed and accepted. If you could spare some time to visit the blocker bugs app and reproduce some of the bugs to confirm whether or not they are blockers, or even propose a fix for a listed bug, that would be hugely appreciated. A summary of the current F40 bugs, with links to each, can be found in this email.

Fedora Linux 41

Change proposals are welcome for F41, and even F42 (and F43 if you’re that prepared!). The first deadline is 19th June if your change requires any infrastructure changes, and 26th June if it is a system-wide change. Self-contained changes may be submitted until 16th July. Those dates might seem far off, but late-landing work impacts a lot of the build and release folks (QA, rel-eng, etc.), so please get your changes proposed, approved, and into Rawhide as early as possible. Below is a list of changes proposed, awaiting FESCo decision, and already accepted for F41.

Proposed

Awaiting FESCo Decision

Accepted F41

Hot Topics

An update on the Git Forge Evaluation has been published by the Fedora Council. Please have a read on discussion.fpo or on the community blog.

The CommOps Team is rebooting! Read about the newly (re)formed team on their blog post and find out how to get involved and join the team.

Help Wanted

Lots of Test Days! Check them out on the QA calendar in fedocal for component-specific days. Help is always greatly appreciated. We also have some packages needing new maintainers and others needing reviews. See the links below to adopt and review packages!


KDE: “Run a command” on notification trigger

Posted by Andreas Schneider on April 05, 2024 11:22 AM

KDE had a feature a lot of people didn’t know about: you could run a command when a notification triggered. The feature wasn’t very well documented and nobody really blogged about it.

However, with the release of KDE Plasma 6, the feature was removed. I learned about it by accident, as it is tracked in Bug #481069. I really need this feature, so I re-implemented it in KDE Plasma 6. It will be available in KDE Plasma 6.1 and KDE Frameworks 6.1.

<figure class="aligncenter size-full">KDE: Run a command<figcaption class="wp-element-caption">KDE: Run a command</figcaption></figure>

Text-to-Speech for calendar events

I’m using the “Run a command” feature for calendar events. Normally you get a popup notification, but the popup is small and appears where all of the others are shown. When I’m concentrating on some code, I simply miss it. If I’m playing a game, I miss it.

The solution for me is to use a Text-to-Speech (TTS) engine. I’ve set up speech-dispatcher with piper-tts on my system. When a reminder triggers, it says: “Andreas, you have an appointment in 10 minutes: Samba Meeting”.

You can find the python script I use here.
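
If you just want a rough idea of what such a command can look like without the full script, here is a minimal sketch using spd-say, the speech-dispatcher command-line client. How the message reaches the script depends on how you wire up the notification, so the argument handling here is an assumption:

#!/bin/sh
# Minimal sketch: speak a reminder aloud via speech-dispatcher.
# -w makes spd-say wait until the message has been fully spoken.
spd-say -w "${1:-You have an appointment in 10 minutes}"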

Endless possibilities

The opportunities are endless. Here are some ideas for what else you could do:

  • Start your meeting/conferencing application prior to the meeting
  • Change the desktop activity before a meeting
  • Lock the screen if a specific WiFi gets disconnected
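
The screen-locking idea, for instance, can be as simple as attaching a one-liner to the relevant network notification (a sketch assuming a systemd-logind session; older systemd versions may require an explicit session ID):

# Lock the caller's session via systemd-logind when the notification fires:
loginctl lock-session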

If you have a nice idea, or have already used this feature in the past, leave a comment to let me know what else you can do.

2024 Git Forge Evaluation

Posted by Fedora Community Blog on April 05, 2024 09:00 AM

Vol. I – Fedora Council 2024 Hackfest

During the Council’s February 2024 hackfest, we discussed the future of Fedora’s git forge – that is, the platform Fedora uses for version control and tracking for packages, source code, documentation, and more. This topic has been around for quite some time. If you are just coming into this conversation, or would like a refresher, #git-forge-future is a good place to start.

Instead of one huge post, the Fedora Council divided the follow-ups from our hackfest into a mini-series of posts throughout April that will cover all the topics we discussed and made decisions on. In each post, we will walk through one core topic and share our discussion and thought process on how we reached our outcomes. The first in this series, because why not start strong 🙂, is an update on our git forge evaluation. Read on for important information.

The Council arrived at two main decisions during this discussion. 

Pagure

First, the Council does not see Pagure as a viable git forge solution for Fedora’s future. Instead, we will investigate other git forge options which meet our core community values: Freedom, Friends, Features, First. When a suitable solution is found, the work needed to migrate to the new git forge will be shared.

At a later date, the Council will announce a sunsetting date for Pagure, with ample time for projects to migrate to the replacement.

Options for an alternate git forge

Second, the Council examined a long list of possibilities, and eliminated those that do not fit. We narrowed down the list to these options we think might meet the needs and spirit of Fedora: 

  1. GitLab Community Edition
  2. Forgejo (a fork of Gitea)

In both cases, the Council determined that the project will need to run the software in Fedora Infrastructure. Fedora Infrastructure previously investigated hosting possibilities from GitLab at length, and could not find something workable without compromising on our community values for software freedom.

The Council is grateful for everything the Pagure developers have done for us, and acknowledges Pagure’s immense positive impact on Fedora. In the end, these other two options were what the Council felt we could honestly ask our community to use.

The Community Platform Engineering (CPE) Team is a Red Hat-sponsored team that supports Fedora Infrastructure and Release Engineering with staffing, efforts, and resources. The Council will ask the Red Hat CPE to lead the maintenance efforts alongside the community. Therefore, the Council encourages the community to collaborate and support the Red Hat CPE in an in-depth technical evaluation for both options.

When these investigations are complete, the project will have at least two weeks of community discussion on the reports. Then, the Council will select an option and will launch a Community Initiative implementing the migration plan.

Share your feedback on git forge future

To keep track of feedback and conversations in one place, direct all feedback and comments to the #git-forge-future tag on Fedora Discussion. You can reply to an existing topic or start a new one.

This will be a long journey for us to take together as a community. Thank you for your patience and feedback as we go down this road together. Please remember to keep your feedback courteous, respectful, and aligned with the Fedora Code of Conduct.


Infra and RelEng Update – Week 14 2024

Posted by Fedora Community Blog on April 05, 2024 07:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team, as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check out the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 01 April – 05 April 2024

<figure class="wp-block-image size-full wp-lightbox-container" data-wp-context="{ "core": { "image": { "imageLoaded": false, "initialized": false, "lightboxEnabled": false, "hideAnimationEnabled": false, "preloadInitialized": false, "lightboxAnimation": "zoom", "imageUploadedSrc": "https://communityblog.fedoraproject.org/wp-content/uploads/2024/04/Weekly-Report-Template22.jpg", "imageCurrentSrc": "", "targetWidth": "8030", "targetHeight": "6101", "scaleAttr": "", "dialogLabel": "Enlarged image" } } }" data-wp-interactive="data-wp-interactive">Infra and Releng infographic </figure>

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work. It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

If you have any questions or feedback, please respond to this report or contact us in the -cpe channel on Matrix.


Untitled Post

Posted by Zach Oglesby on April 04, 2024 11:41 PM

The worst part about going to Japan is having to leave. It will always be my favorite country and I look forward to the next time.

Sharing a monitor between Linux & Mac

Posted by Christof Damian on April 04, 2024 04:54 PM

Desk with two monitors and laptop
For my new job, I (annoyingly) have to use a silly MacBook. For everything else, I have a nice, beautiful desktop running Fedora.

I looked into KVMs to share my monitor and keyboard between the two computers, but couldn't really find something reasonably priced and functional. 

Synergy/Barrier/InputLeap for keyboard sharing

I have used Synergy before to share a keyboard and mouse between Linux computers, and this was already a good step. There is a Linux fork of Synergy called Barrier, which has now been forked again as InputLeap. It also allows copy & paste between systems.

This brought me halfway to where I wanted to be, but I was still restricted to the tiny laptop screen on the Mac.

DDC monitor input source switching

Both of my monitors are connected via DisplayPort to my desktop. I then connected the right monitor to the Mac via HDMI as well. This already allowed me to switch easily between the input sources with the monitor’s on-screen menu.

While researching a new monitor that has a built-in KVM but only comes with software for Mac & Windows, I found out that you can control most monitor functionality via DDC.

This includes things like brightness, contrast, rotation, and most importantly the input source. 

For Linux, you can use ddcutil together with your window manager’s keyboard shortcut settings. For me, it is these two commands; your monitor and sources may vary.

ddcutil -d 1 setvcp 0x60 0x0f # display 1 -> displayport

ddcutil -d 1 setvcp 0x60 0x11 # display 1 -> hdmi
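
If you would rather have a single shortcut that flips back and forth, a small toggle script also works. This is a sketch: the getvcp output format varies between ddcutil versions and monitors, so treat the grep pattern as an assumption to verify on your setup.

#!/bin/sh
# Toggle display 1 between DisplayPort (0x0f) and HDMI (0x11).
# VCP feature 0x60 is the input source.
if ddcutil -d 1 getvcp 0x60 | grep -q 'sl=0x0f'; then
    ddcutil -d 1 setvcp 0x60 0x11  # currently DisplayPort, switch to HDMI
else
    ddcutil -d 1 setvcp 0x60 0x0f  # otherwise switch back to DisplayPort
fi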

On OS X you can use BetterDisplay, a pretty nifty tool to control all kinds of aspects of your display; it is definitely worth a look. It also supports keyboard shortcuts to change input sources.

BetterDisplay screenshot

There you go, easy-peasy and for free. I hope that helps someone, or me in the future, when I forget how it works.


Roll your own static blog analytics

Posted by Major Hayden on April 04, 2024 12:00 AM
Static blogs are easy to serve, but so many of the free options have no analytics whatsoever. This post talks about how to serve your own blog from a container with live-updating analytics.

One Week With KDE Plasma Workspaces 6 on Fedora 40 Beta (Vol. 1)

Posted by Stephen Gallagher on April 03, 2024 11:41 PM

Why am I doing this?

As my readers may be aware, I have been a member of the Fedora Engineering Steering Committee (FESCo) for over a decade. One of the primary responsibilities of this nine-person body is to review the Fedora Change Proposals submitted by contributors and provide feedback as well as being the final authority as to whether those Changes will go forth. I take this responsibility very seriously, so when this week the Fedora KDE community brought forth a Change Proposal to replace GNOME Desktop with KDE Plasma Workspaces as the official desktop environment in the Fedora Workstation Edition, I decided that I would be remiss in my duties if I didn’t spend some serious time considering the decision.

As long-time readers of this blog may recall, I was a user of the KDE desktop environment for many years, right up until KDE 4.0 arrived. At that time, (partly because I had recently become employed by Red Hat), I opted to switch to GNOME 2. I’ve subsequently continued to stay with GNOME, even through some of its rougher years, partly through inertia and partly out of a self-imposed responsibility to always be running the Fedora/Red Hat premier offering so that I could help catch and fix issues before they got into users’ and customers’ hands. Among other things, this led to my (fairly well-received) series of blog posts on GNOME 3 Classic. As it has now been over ten years and twenty(!) Fedora releases, I felt like it was time to give KDE Plasma Workspaces another chance with the release of the highly-awaited version 6.0.

How will I do this?

I’ve committed to spending at least a week using KDE Plasma Workspaces 6 as my sole working environment. This afternoon, I downloaded the latest Fedora Kinoite installer image and wrote it to a USB drive.[1] I pulled out a ThinkPad I had lying around and went ahead with the install process. I’ll describe my setup process a bit below, but (spoiler alert) it went smoothly and I am typing up this blog entry from within KDE Plasma.

What does my setup look like?

I’m working from a Red Hat-issued ThinkPad T490s, a four-core Intel “Whiskey Lake” x86_64 system with 32 GiB of RAM and embedded Intel UHD 620 graphics. Not a powerhouse by any means, but only about three or four years old. I’ve wiped the system completely and done a fresh install rather than installing the KDE packages by hand onto my usual Fedora Workstation system. This is partly to ensure that I get a pristine environment for this experiment and partly so I don’t worry about breaking my existing system.

Thoughts on the install process

I have very little to say about the install process. It was functionally identical to installing Fedora Silverblue, with the minimalist Anaconda environment providing me some basic choices around storage (I just wiped the disk and told it to repartition it however it recommends) and networking (I picked a pithy hostname: kuriosity). That done, I hit the “install” button, rebooted and here we are.

First login

Upon logging in, I was met with the KDE Welcome Center (Hi Konqi!), which I opted to proceed through very thoroughly, hoping that it would provide me enough information to get moving ahead. I have a few nitpicks here:

First, the second page of the Welcome Center (the first with content beyond “this is KDE and Fedora”) was very sparse, saying basically “KDE is simple and usable out of the box!” and then using up MOST of its available screen real estate with a giant button directing users to the Settings app. I am not sure what the goal is here: it’s not super-obvious that it is a button, but if you click on it, you launch an app that is about as far from “welcoming” as you can get (more on that later). I think it might be better to just have a little video or image here that just points at the settings app on the taskbar rather than providing an immediate launcher. It both disrupts the “Welcome” workflow and can make less-technical users feel like they may be in over their heads.

<figure class="wp-block-image size-large"></figure>

I actually think the next page is a much better difficulty ramp; it presents some advanced topics that they might be interested in, but it doesn’t look quite as demanding of them and it doesn’t completely take the user out of the workflow.

<figure class="wp-block-image size-large"></figure>

Next up on the Welcome Center was something very welcome: an introduction to Discover (the “app store”). I very much like this (and other desktop environments could absolutely learn from it). It immediately provides the user with an opportunity to install some very popular add-ons.[2]

<figure class="wp-block-image size-large"></figure>

The next page was a bit of a mixed bag for me. I like that the user is given the option to opt-in to sharing anonymous user information, but I feel like the slider and the associated details it provided are probably a bit too much for most users to reasonably parse. I think this can probably be simplified to make it more approachable (or at least bury the extra details behind a button; I had to extend the window from its default size to get a screenshot).

<figure class="wp-block-image size-large"></figure>

At the end of the Welcome Center was a page that gave me pause: a request for donations to the KDE project. I’m not sure this is a great place for it, since the user hasn’t even spent any time with the environment yet. It seems a bit too forward in asking for donations. I’m not sure where a better place is, but getting begged for spare change minutes after installing the OS doesn’t feel right. I think that if we were to make KDE the flagship desktop behind Fedora Workstation, this would absolutely have to come out. It gives a bad first impression. I think a far better place to leave things would be the preceding page.

<figure class="wp-block-image size-large"></figure>

OK, so let’s use it a bit!

With that out of the way, I proceeded to do a bit of setup for personal preferences. I installed my preferred shell (zsh) and some assorted CLI customizations for the shell, vi, git, etc. This was identical to the process I would have followed for Silverblue/GNOME, so I won’t go into any details here. I also have a preference for touchpad scrolling to move the page (like I’m swiping a touch-screen), so I set that as well. I was confused for a bit as it seemed that wasn’t having an effect, but I realized I had missed that “touchpad” was a separate settings page from “mouse” and had flipped the switch on the wrong devices. Whoops!

In the process of setting things up to my liking, I did notice one more potential hurdle for newcomers: the default keyboard shortcuts for working with desktop workspaces are different from GNOME, macOS and Windows 11. No matter which major competitor you are coming from, this will cause muscle-memory stumbles. It’s not that any one approach is better than another, but the fact that they are all completely different makes me sigh and forces me to think about how I’m interacting with the system instead of what I want to do with it. Unfortunately, KDE did not make figuring this out easy on me; even when I used the excellent desktop search feature to find the keyboard shortcut settings, I was presented with a list of applications that did not clearly identify which one might contain the system-wide shortcuts. By virtue of past experience with KDE, I was able to surmise that the KWin application was the most likely place, but the settings app really didn’t seem to want to help me figure that out. Then, when I selected KWin, I was presented with dozens of pages of potential shortcuts, many of which were named similarly to the ones I wanted to identify. There were simply too many options with no clear way to sort them. I ended up resorting to trying random combinations of ctrl, alt, meta and shift with arrow keys until I eventually stumbled upon the correct set.

Next, I played around a bit with Discover, installing a pending firmware update for my laptop (which hadn’t been turned on in months). I also enabled Flathub and installed Visual Studio Code to see how well Flatpak integration works, and to have an app that I know doesn’t natively use Wayland. That was how I discovered that my system had defaulted to a 125% fractional scaling setup. In Visual Studio Code, everything looked very slightly “off” compared to the rest of the system. Not in any way I could easily put my finger on, until I remembered how badly fractional scaling behaved on my GNOME system. I looked into the display settings and, sure enough, I wasn’t at an integer scaling value. Out of curiosity, I played around with the toggle for whether to have X11 apps scale themselves or for the system to do it, and found that the default “Apply scaling themselves” was FAR better looking in Visual Studio Code. At the end of the day, however, I decided that I preferred the smaller text and larger available working area afforded me by setting the scaling back to 100%. That said, if my eyesight were poorer or I needed to sit further away from the screen, I can definitely see the advantages of fractional scaling, and I was very impressed by how sharp it managed to be. Full marks on that one!

I next went to play around in Visual Studio Code with one of my projects, but when I tried to git clone it, I hit an issue where it refused my SSH key. Digging in, I realized that KDE does not automatically check for keys in the default user location (~/.ssh) and prompt for their passphrases. I went ahead and used ssh-add to manually import them into the SSH keyring and moved along. I find myself going back and forth on this; on the one hand, there’s a definite security tradeoff inherent in allowing the desktop to prompt (and offer to save) the passphrase in the desktop keyring (encrypted by your login password). I decline to save mine persistently, preferring to enter it each time. However, there’s a usability tradeoff to not automatically at least launching an askpass prompt. In any case, it’s not really an issue for me to make this part of my usual toolbox entry process, but I’m a technical user. Newbies might be a bit confused if they’re coming from another environment.
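
For reference, the manual step amounts to the standard OpenSSH commands (nothing KDE-specific here):

# Add the default keys (~/.ssh/id_*) to the running agent, prompting for
# passphrases as needed; pass a path to load a specific key instead.
ssh-add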

I then went through the motions of getting myself signed in to the various messaging services that I use on a daily basis, including Fedora’s Matrix. Once signed in there via Firefox, I was prompted to enable notifications, which I did. I then discovered the first truly sublime moment I’ve had with Plasma Workspaces: the ephemeral notifications provided by the desktop. The way they present themselves, off to the side, with a vibrant preview window and a progress countdown until they vanish, is just *chef’s kiss*. If I take nothing else away from this experience, it’s that it is possible for desktop notifications to be beautiful. Other desktops need to take note here.

I think this is where I’m going to leave things for today, so I’ll end with a short summary: as a desktop environment, it seems to do just about everything I need it to do. It’s customizable to a fault: it’s got so many knobs to twist that it desperately needs a map (or perhaps a beginner vs. expert view of the settings app). Also, the desktop notifications are like a glass of icy lemonade after two days lost in the desert.

  1. This was actually my first hiccough: I have dozens of 4 GiB thumbdrives lying around, but the Kinoite installer was 4.2 GiB, so I had to go buy a new drive. I’m not going to ding KDE for my lack of preparedness, though! ↩
  2. Unfortunately I hit a bug here; it turns out that all of those app buttons will just link to the updates page in Discover if there is an update waiting. I’m not sure if this is specific to Kinoite yet. I’ll be investigating and filing a ticket about it in the appropriate place. ↩

fwupd and xz metadata

Posted by Richard Hughes on April 03, 2024 08:39 AM

A few people (and multi-billion dollar companies!) have asked for my response to the xz backdoor. The fwupd metadata that millions of people download every day is a 9.5MB XML file — which thankfully is very compressible. This used to be compressed as gzip by the LVFS, making it a 1.6MB download for end-users, but in 2021 we switched to xz compression instead.

What actually happens behind the scenes is that the libxmlb library loads the optionally compressed metadata into a mmap-able binary blob, and then it gets used by fwupd to look for new updates for specific hardware. In libxmlb 0.3.3 we added support for xz as a compression format. Then fwupd 1.8.7 was released with xz support, preferring the xz format to the “legacy” gz format — as the metadata became a 1.1MB download, saving significant amounts of data from the CDN.

Then this week we learned that xz wasn’t the kind of thing we want to depend on. Out of an abundance of caution (and to be clear — my understanding is there is no fwupd or LVFS security problem of any kind) I’ve switched the LVFS to also generate zstd metadata, made libxmlb no longer hard-depend on lzma, and switched fwupd to prefer the zstd metadata over the xz metadata if the installed version of libjcat supports it. The zstd metadata is also ~3% smaller than xz (and faster to decompress), but the real benefit is that I now trust it a lot more than xz.

I’ll be doing new libxmlb and fwupd releases with the needed changes next week.

LWN subscription slots available for Fedora contributors

Posted by Fedora Community Blog on April 03, 2024 08:00 AM

Linux Weekly News — or “LWN” — is a small, independent website dedicated to covering Linux and open source topics. There’s really nothing like it — from daily updates from different communities (including, of course, Fedora) to deep-dives into technical topics to reporting from various conferences and events. Red Hat funds a subscription for Fedora community members, of which we currently have two open slots. There are also a number of people who haven’t logged in for several years… I intend to remove these to make room if there is enough interest to warrant that.

Note that if you work for Red Hat, there is a different LWN subscription, which should be automatically activated when you log in to the site from a company network or VPN. (You do have to do this periodically to keep it active, though.) So, priority for the Fedora community slots goes to non-Red Hatters. And, of course, if you can get your employer to pay, or want to help out by paying for your own subscription, LWN could always use the help.

Also, all content is made available for free after several weeks — and there is even a way for subscribers to create sharing links that give free access to even brand-new articles.

Claim a Fedora LWN subscription

But if you don’t have access another way, are active in Fedora, and would make use of the subscription… post a reply here with your LWN username and I’ll hook you up.


Untitled Post

Posted by Zach Oglesby on April 03, 2024 03:12 AM

Even though I am not afraid of heights, my little brain was telling me to walk softly.

The syslog-ng health check

Posted by Peter Czanik on April 02, 2024 01:56 PM

Version 4.2 of syslog-ng introduced a healthcheck option to syslog-ng-ctl. It prints three syslog-ng-related metrics on screen – if it can reach syslog-ng, that is. You can use it from scripts to monitor the health of syslog-ng.
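
From a monitoring script, that can look something like the following (a sketch assuming syslog-ng 4.2+; check your syslog-ng-ctl version for the exact flags):

#!/bin/sh
# Exit non-zero if syslog-ng does not respond to the healthcheck in time.
if ! syslog-ng-ctl healthcheck --timeout 5 >/dev/null 2>&1; then
    echo "syslog-ng healthcheck failed" >&2
    exit 1
fi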

https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-health-check

syslog-ng logo

Join us! Restarting Community Ops team meetings in 2024

Posted by Fedora Community Blog on April 02, 2024 08:00 AM

After a long pause since 2019, the Fedora Community Operations (CommOps) team will hold an inaugural meeting in support of the recently-approved Community Ops 2024 Reboot Community Initiative. If you are interested, please fill in the meeting day/time poll by Thursday, 11 April 2024.

Bringing back the team in 2024

After a bit of a break—thanks to *cough cough* COVID—the Community Ops team is revving up again! We’ve got big plans, like the Fedora Linux 40 Release Party in May and diving into the Community Initiative proposed from February to October this year.

The first meeting will be co-chaired by Alberto Rodriguez Sanchez (bt0dotninja), Robert Wright (rwright), and Justin W. Flory (jflory7).

But here’s the deal: we need your help. We want to invite the community to be part of the new team. This includes community members who have been around for a long time as well as those who have only been around a short time.

Fill in the meeting day/time poll

So, help us mark the calendars! Our first meeting is happening between April 15th and 19th, 2024, but we need your input to lock down the perfect time. Fill in the meeting day/time poll and let us know what works for you. A decision on the team meeting date/time will be posted on Friday, 12 April 2024.

Let’s make this restart a success together! If you have questions, drop a comment to this post on Fedora Discussion.


Fedora back to SoCal for 2024

Posted by Alejandro Acosta on April 02, 2024 12:35 AM

We’ve just had another magnificent event this past March in beautiful Pasadena for the 21st edition of the Southern California Linux Expo, the largest community-driven Linux and open source conference in North America. Fedora and its crew are proud to have participated since the conference’s 8th edition back in 2009! It may sound like no biggie, but as the conference grows older, bigger and more important, so do the commitment, responsibility and even the logistics required to provide a memorable experience to the visitors.

This is not the first time that I have acted as event owner, but it is the first time that I experienced with full intensity the responsibility of having everything ready and putting it all together to meet the expectations set by the reputation that precedes us. I don’t think I’ve talked about it in any of my past reports, but there are lots of things that need to be arranged before Day 1, which is why we start preparing everything in the fall of the previous year. Just to mention some of the most relevant: polling Fedorians for interest in attending the event and working for our project during it, creating the event page, creating the budget estimate, getting the budget approved and following up, requesting the swag to give away at the booth, requesting shipment of the event box for booth setup, contacting the organizers to request participation and following up, requesting the creation of the Fedora Badge for the event and printing the poster for scanning it, and a lot more! I don’t pretend to take full credit for all of this, but I’d like to use this opportunity to thank the fantastic Fedora crew that made it possible. Thank you Perry Rivera for helping me with many of these tasks and for volunteering as co-owner of the event. Thank you Brian Monroe for assisting with the event box and swag shipping to your personal address, and also for your remarkable booth duty. Thank you Scott Williams for your passion for the project and the way you keep people interested in it. And thank you Justin Flory for all your support.

Thank you guys, you Rock!

So, here we are, Thursday afternoon, one day before the exhibition floor opens.

<figure class="alignright size-large is-resized"></figure>

This day is used for co-located events like Kubernetes Community Day, DevOps LA, Nixcon, Ubucon, etc. I made sure that we had everything ready for the booth setup, not only the Fedora stuff but also the essentials: power strips, chairs, networking, trash can, etc. After this, I had the opportunity to attend a few talks and workshops of the ongoing events. I personally found Kwaai and NixOS very interesting and worth taking a more detailed look at.

Our good friends Carl George and Shaun McCaunce from the CentOS Project offered a talk on CentOS and a packaging workshop that drew great interest from the audience and helped to clear the air and clarify the relationship between Fedora, CentOS and RHEL.

<figure class="wp-block-image size-large"></figure>

Later in the afternoon we had some time for networking at a cocktail reception offered by Kwaai, and we had a chance to talk and spend some time with our Fedora Project Leader, Matthew Miller.

On Friday we were all set for the Exhibit Hall, just waiting for it to open so we could receive the visitors to our booth. Friday is typically a busy day and this year was no exception; we had a lot of visitors at our booth and many of them stayed for a while, commenting on their experience with Fedora, asking specific questions, or even raising tech doubts.

<figure class="wp-block-image size-large"></figure>

We closed the day with a Mexican dinner with all of the crew and our friends from CentOS, a delightful evening that prepared us for the longest day of the conference: Saturday.

<figure class="wp-block-image size-large"></figure>

And along came Saturday. This day is busy because the Exhibit Hall opens from 10:00 to 18:00 and it is when we have the largest number of visitors. There are three things I’d like to highlight from this day that somehow made a difference from previous editions.

The first is that we were giving away raffle tickets to our booth’s visitors for some of the swag that we had, and this created a different environment and new expectations; people gathered at the times of the raffles and, for many, it was a way to identify us and keep us on their radar.

<figure class="wp-block-image size-large"></figure>

Another highlight is that the talks by our Fedora ambassadors and our Fedora Leaders discussion panel were both scheduled for this day; unfortunately they were scheduled at the same time and it was hard to attend them live 😦

Red Hat’s Brian Proffitt took care of our booth so I could get a few pictures of both. Thank you Brian!

<figure class="wp-block-image size-large"></figure> <figure class="wp-block-image size-large"></figure>

And the third highlight is a bit personal: since I was alone in charge of the booth while my friends were at their talks, I had to receive a couple of guided tours (normally students or newbies to Linux) and pitch Fedora in less than a minute. The pressure of compressing so many things I had to say about Fedora and summarizing them in so little time 😀

<figure class="wp-block-image size-large"></figure>

Sunday is the quietest day; the Exhibit Hall closes at 14:00 and most exhibitors are wrapping up or have run out of swag. Not our case 🙂 We ran a few raffles and wrapped up as well.

I am very satisfied with the outcome of this edition. We now have a means of continuous communication between the crew, and we have started processing new ideas (like bringing new and fresh faces to promote Fedora) and working on them. We’d like to share our experience from the fourteen SCaLE editions we have participated in with a wider Fedora audience (thinking about Flock), and continue improving the Fedora presence and contributing to its acceptance. I’m excited for the future.

Week 13 in Packit

Posted by Weekly status of Packit Team on April 02, 2024 12:00 AM

Week 13 (March 27th – April 2nd)

  • The default behaviour of changelog entry generation has been changed to comply with the Fedora Packaging Guidelines (see the relevant Fedora Packaging Committee discussion). From now on, the default changelog entry is "- Update to version <version>". Users can still affect this behaviour using custom commands in the changelog-entry action or with the copy_upstream_release_description configuration option (see the sketch after this list). (packit#2253)

  • "[packit]" prefix has been removed from default dist-git commit message titles in order to prevent unnecessary noise in autogenerated changelog. Users can override this using the commit-message action. (packit#2263)

Untitled Post

Posted by Zach Oglesby on April 01, 2024 02:55 PM

I’m not sure how else to say this but a spa/hot tub/bath filled with herbs and tea is amazing. I am so relaxed now.

XZ Bonus Spectacular Episode

Posted by Josh Bressers on April 01, 2024 12:03 PM

Josh and Kurt talk about the recent events around XZ. It’s only been a few days, and it’s amazing what we already know. We explain a lot of the basics we currently know, with the attitude that much of these details will change quickly over the coming week. We can’t fix this problem as it stands; we don’t know where to start yet. But that’s not a reason to lose hope. We can fix this if we want to, but it won’t be flashy; it’ll be hard work.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3359-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/XZ_Bonus_Spectacular_Episode.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/XZ_Bonus_Spectacular_Episode.mp3</audio>

Show Notes

Fedora Linux Flatpak cool apps to try for April

Posted by Fedora Magazine on April 01, 2024 08:00 AM

This article introduces projects available in Flathub with installation instructions.

Flathub is the place to get and distribute apps for all of Linux. It is powered by Flatpak, allowing Flathub apps to run on almost any Linux distribution.

Please read “Getting started with Flatpak”. In order to enable Flathub as your Flatpak provider, use the instructions on the Flatpak site.
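
At the time of writing, those instructions boil down to adding the Flathub remote (command as documented on the Flathub setup page; verify against the current instructions there):

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo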

These apps are classified into four categories:

  • Productivity
  • Games
  • Creativity
  • Miscellaneous

Norka

In the Productivity section we have Norka. Norka is a distraction-free writing app that allows you to focus on the writing and not on the formatting or other distractions.

Features:

  • Local storage of notes.
  • Magically saved at any moment.
  • Easily exportable to HTML, Docx, and PDF in one click.
  • Markdown support.
  • It’s themeable.
<figure class="wp-block-image size-full"></figure>

You can install “Norka” by clicking the install button on the web site or manually using this command:

flatpak install flathub com.github.tenderowl.norka

0 A.D.

In the Games section we have 0 A.D. This is an old-timer champion in the Open Source world. It’s a historical Real Time Strategy (RTS) game currently under development by Wildfire Games (a global group of volunteer game developers). As the leader of an ancient civilization, you must gather the resources you need to raise a military force and dominate your enemies.

There are thirteen factions: Three of the Hellenic States (Athens, Sparta and Macedonia), two of the kingdoms of Alexander the Great’s successors (Seleucids and Ptolemaic Egyptians), two Celtic tribes (Britons and Gauls), the Romans, the Persians, the Iberians, the Carthaginians, the Mauryas and the Kushites. Each civilization is complete with substantially unique artwork, technologies and civilization bonuses.

This was the second game I ever played on Linux, and I’m so happy that development is still in progress. If my memory is right, this started as a mod for a AAA game that was later turned into a stand-alone game. It is full-featured and way too fun.

<figure class="wp-block-image size-large"></figure>

You can install “0 A.D.” by clicking the install button on the web site or manually using this command:

flatpak install flathub com.play0ad.zeroad

0 A.D., of course, is also available as an RPM in Fedora’s repositories.

Gaupol

In the Miscellaneous section we have Gaupol. This is an editor for text-based video subtitle files. It helps you with tasks such as creating and translating subtitles, timing subtitles to match video, and correcting common errors.

<figure class="wp-block-image size-large"></figure>

You can install “Gaupol” by clicking the install button on the web site or manually using this command:

flatpak install flathub io.otsaloma.gaupol

Gaupol is also available as an RPM in Fedora’s repositories.

Darktable

In the Creativity section we have Darktable. Darktable is an open source photography workflow application and raw developer. A virtual light-table and darkroom for photographers. It manages your digital negatives in a database, lets you view them through a zoomable light-table, and enables you to develop raw images and enhance them.

  • Non-destructive editing workflow.
  • Functions operate on 4×32-bit floating point pixel buffers.
  • GPU accelerated image processing.
  • Professional color management.
  • Support for image formats, like JPEG, CR2, NEF, HDR, PFM, and RAF.
  • Automate repetitive tasks with Lua scripts.
<figure class="wp-block-image size-large"></figure>

You can install “darktable” by clicking the install button on the web site or manually using this command:

flatpak install flathub org.darktable.Darktable

darktable is also available as an RPM in Fedora’s repositories.

Episode 422 – Do you have a security.txt file?

Posted by Josh Bressers on April 01, 2024 12:00 AM

Josh and Kurt talk about the security.txt file. It’s not new, but it’s not something we’ve discussed before. It’s a great idea, an easy format, and well defined. It’s not high on many of our todo lists, but it’s something worth doing.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3351-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_422_Do_you_have_a_securitytxt_file.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_422_Do_you_have_a_securitytxt_file.mp3</audio>

Show Notes

Untitled Post

Posted by Zach Oglesby on March 31, 2024 02:30 AM

Can’t wait to get inside!

Coming Out as Trans

Posted by Hari Rana (TheEvilSkeleton) on March 31, 2024 12:00 AM

Vocabularies

Before I delve into my personal experience, allow me to define several key terms:

  • Sex: Biological characteristics of males and females.
  • Gender: Social characteristics of men and women, such as norms, roles, and behaviors.
  • Gender identity: How you personally view your own gender.
  • Gender dysphoria: Sense of unease due to a mismatch between gender identity and sex assigned at birth.
  • Transgender (trans): When the gender identity differs from the sex assigned at birth. If someone’s gender identity is woman but their sex assigned at birth is male, then they are generally considered a trans person.
  • Cisgender (cis): The opposite of transgender; when the gender identity fits with the sex assigned at birth.
  • Non-binary: Anything that is not exclusively male or female. Imagine if male and female were binary numbers: male is 0 and female is 1. Anything that is not 0 or 1 is considered non-binary. If I see myself as the number 0.5 or 2, then I’m non-binary. Someone who considers themself to be between a man and woman would be between 0 and 1 (e.g. 0.5).
  • Agender: Under the umbrella of non-binary; it essentially means non-gendered (lack of gender identity) or gender neutral. Whichever definition applies varies from person to person. It’s also worth noting that many agender people don’t consider themselves trans.
  • Label: Portraying which group you belong to, such as “non-binary”, “transfemme” (trans (binary and non-binary) people who are feminine), etc.

Backstory

Allow me to share a little backstory. I come from a neighborhood where being anti-LGBTQ+ was considered “normal” a decade ago. This outlook was quite common in the schools I attended, and I wouldn’t be surprised if a significant portion of the people around here are still anti-LGBTQ+ today. Many individuals, including former friends and teachers, expressed their opposition to LGBTQ+ in the past, which influenced my own view against the LGBTQ+ community at the time.

Due to my previous experiences and the environment I live(d) in, I tried really hard to avoid thinking about my sexuality and gender identity for almost a decade. Every time I thought about my sexuality and gender identity, I’d do whatever I could to distract myself. I kept forcing myself to be as masculine as possible. However, since we humans have a limit, I eventually reached a limit to the amount of thoughts I could suppress.

I always struggled with communicating and almost always felt lonely whenever I was around the majority of people, so I pretended to be “normal” and hid my true feelings. About 5 years ago, I began to spend most of my time online. I met people who are just like me, many of whom I’m still friends with 3-4 years later. At the time, despite my strong biases against LGBTQ+ from my surroundings, I naturally felt more comfortable within the community, far more than I did outside. I was able to express myself more freely and have people actually understand me. It was the only time I didn’t feel the need to act masculine. However, despite all this, I was still in the mindset of suppressing my feelings. Truly an egg irl moment

Eventually, I was unable to hold my thoughts in anymore, and everything exploded. All I could think about for a few months was my gender identity: the biases from my childhood environment often clashed with me questioning my own identity, and whether I really saw myself as a man. I just had these recurring thoughts and a lot of anxiety about where these thoughts were coming from, and why.

Since then, my work performance got exponentially worse by the week. I quickly lost interest in my hobbies and began to distance myself from communities and friends. I often lashed out at people because my mental health was getting worse. My sleep quality was also deteriorating, which only worsened the situation. On top of that, I still had to hide my feelings, which continued to exhaust me. All I could think about for months was my gender identity.

After I slowly became comfortable with and accepting of my gender identity, I started having suicidal thoughts on a daily basis, which I was able to endure… until I reached a breaking point once again. I was having suicidal thoughts on a bi-hourly basis. It escalated to hourly, and finally almost 24/7. I obviously couldn’t work anymore, nor could I do my hobbies. I needed to hide my pain because of my social anxiety, and I didn’t have the courage to call the suicide hotline either. What happened instead was that I talked to many people, some of whom encouraged and even helped me seek professional help.

However, that was all in the past. I feel much better and more comfortable with myself and the people I opened up to, and now I’m confident enough to share it publicly 😊

Coming Out‎ ‎🏳️‍⚧️

I identify as agender. My pronouns are any/all — I’ll accept any pronouns. I don’t think I have a preference, so feel free to call me as whatever you want; whatever you think fits me best :)

I’m happy with agender because I feel disconnected from my own masculinity. I don’t think I belong at either end of the spectrum (or even in between), so I’m pretty happy that there is something that best describes me.

Why the Need to Come Out Publicly?

So… why come out publicly? Why am I making a big deal out of this?

Simply put, I am really proud of and relieved by discovering myself. For so long, I tried to suppress my thoughts and force myself to be someone I was fundamentally not. That never worked, so I explored myself instead and discovered that I’m trans. However, I also wrote this article to explain how much living in a transphobic environment affected me, even before I discovered myself.

For me, displaying my gender identity is like displaying a username or profile picture. We choose a username and profile picture when possible to give a glimpse of who we are.

I chose “TheEvilSkeleton” as my username because I used to play Minecraft regularly when I was 10 years old. While I don’t play Minecraft anymore, it helped me discover my passion: creating and improving things and working together — that’s why I’m a programmer and contribute to software. I chose Chrome-chan as my profile picture because I think she is cute and I like cute things :3. I highly value my username and profile picture, the same way I now value my gender identity.

Am I Doing Better?

While I’m doing much better than before, I did go through a depressive episode that I’m still recovering from at the time of writing, and I’m still processing the discovery because of my childhood environment, but I certainly feel much better after discovering myself and coming out.

However, coming out won’t magically heal the trauma I’ve experienced throughout my childhood environment. It won’t make everyone around me accept who I am, or even make them feel comfortable around me. It won’t drop the amount of harassment I receive online to zero — if anything, I write this with the expectation that I will be harassed and discriminated against more than ever.

There will be new challenges that I will have to face, but I still have to deal with the trauma, and I will have to deal with possible trauma in the future. The best thing I can do is train myself to be mentally resilient. I certainly feel much better coming out, but I’m still worried about the future. I sometimes wish I wasn’t trans, because I’m genuinely terrified about the things people have gone through in the past, and are still going through right now.

I know I’m going to have to fight for my life now that I’ve come out publicly, because apparently the right to live as yourself is still controversial in 2024.

Seeking Help

Of course, I wasn’t alone in my journey. What helped me get through it was talking to my friends and seeking help in other places. I came out to several of my friends in private. They were supportive and listened to me vent; they reassured me that there’s nothing wrong with me, and congratulated me for discovering myself and coming out.

Some of my friends encouraged and helped me seek professional help at local clinics for my depression. I have gained more confidence in myself; I am now capable of calling clinics by myself, even when I’m nervous. If these suicidal thoughts escalate again, I will finally have the courage to call the suicide hotline.

If you’re feeling anxious about something, don’t hesitate to talk to your friends about it. Unless you know that they’ll take it the wrong way and/or are currently dealing with personal issues, they will be more than happy to help.

I have messaged so many people in private and felt much better after talking. I’ve never felt so comforted by friends who try their best to be there for me. Some friends have listened without saying anything, while some others have shared their experiences with me. Both were extremely valuable to me, because sometimes I just want (and need) to be heard and understood.

If you’re currently trying to suppress your thoughts and really trying to force yourself into the gender you were assigned at birth, like I was, the best advice I can give you is to give yourself time to explore yourself. It’s perfectly fine to acknowledge that you’re not cisgender (that is, if you’re not). You might want to ask your trans friends to help you explore yourself. From experience, it’s not worth forcing yourself to be someone you’re not.

Closing Thoughts

I feel relieved about coming out, but to be honest, I’m still really worried about the future of my mental health. I really hope that everything will work out and that I’ll be more mentally resilient.

I’m really happy that I had the courage to take the first steps, to go to clinics, to talk to people, to open up publicly. It’s been really difficult for me to write and publish the article. I’m really grateful to have wonderful friends, and legitimately, I couldn’t ask for better friends.

Use Dante to proxy web traffic

Posted by Fabio Alessandro Locati on March 31, 2024 12:00 AM
A while ago, I posted about using SSH to proxy traffic within a Nebula network context. In the last few months, I changed my implementation because SSH required some steps and accesses that I was not fully happy with. In the previous iteration, I was using SSH as a SOCKS proxy. The problem, though, is that I needed to set up the connection every time and use my SSH credentials, so it was difficult to have it always on.
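
For context, the always-on inconvenience described here comes from the classic dynamic-forwarding invocation, something like the following (host and port are illustrative):

# Open a SOCKS proxy on localhost:1080, tunneled through the remote host;
# -N skips running a remote command, keeping the connection tunnel-only.
ssh -D 1080 -N user@gateway.example.com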

Untitled Post

Posted by Zach Oglesby on March 30, 2024 06:59 AM

Beautiful park near my hotel in Tokyo.

Picture of a pond with a small, treed island in the middle.

CVE-2024-3094: Urgent alert for Fedora Linux 40 and Rawhide users

Posted by Fedora Magazine on March 29, 2024 10:32 PM

The Fedora Council was notified of CVE-2024-3094 on Friday, March 29th, related to the xz tools and libraries. At this time, Fedora Rawhide users are likely to have received the tainted package. Fedora Linux 40 pre-release users may have received the tainted package build xz-5.6.0-2.fc40 through the updates-testing repository. Fedora Linux 39 and 38 users are NOT impacted.

Any instances of Fedora Linux 40 and Fedora Rawhide pre-releases before March 29th should be considered as potentially COMPROMISED. Fedora Linux 40 was updated to xz-5.4.6-3.fc40 on Friday, March 29th at 04:33:13 UTC (see f40 Bodhi update). Fedora Rawhide was reverted to xz-5.4.6-3.fc41 on Friday, March 29th at 15:21:19 UTC (see f41 Bodhi update). As a reminder, Fedora Rawhide is the development distribution of Fedora Linux, and serves as the basis for future Fedora Linux builds (in this case, the yet-to-be-released Fedora Linux 41).

CVE-2024-3094 highlights

Red Hat Product Security published this description of the recently-discovered vulnerability:

Malicious code was discovered in the upstream tarballs of xz, starting with version 5.6.0. Through a series of complex obfuscations, the liblzma build process extracts a prebuilt object file from a disguised test file existing in the source code, which is then used to modify specific functions in the liblzma code. This results in a modified liblzma library that can be used by any software linked against this library, intercepting and modifying the data interaction with this library.

https://access.redhat.com/security/cve/CVE-2024-3094

A detailed update history of the xz package across all currently-supported Fedora/EPEL branches is available on Fedora Bodhi. While the malicious update was never sent to Fedora Linux 40 stable repositories, Fedora Linux 40 pre-release users (including beta users) may have received it from the updates-testing repositories, which are enabled by default in pre-release versions of Fedora Linux to assist with testing.

Fedora Linux 40 Beta users can mitigate this vulnerability now. The downgraded version is in the Fedora Linux 40 stable repositories; if you update now, you will get the downgraded build from the stable repositories. (If the package is not shown in a dnf upgrade, some users report that dnf distro-sync correctly pulls in the downgraded package.)
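
Concretely, the mitigation amounts to something like the following (a sketch following the guidance above; the expected version string is the reverted build mentioned earlier):

sudo dnf upgrade --refresh
# If the fixed xz build is not offered as an update, distro-sync can
# pull in the downgraded package from the stable repositories:
sudo dnf distro-sync xz
rpm -q xz  # expect xz-5.4.6-3.fc40, the reverted build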

An extended discussion on the oss-security mailing list provides more detail about the nature of the attack and how it was initially discovered. More information about this vulnerability can also be found on the Red Hat Blog.

Thank you first responders!

Fedora has a reputation for being First, but it would not be possible without the Friends who make it so. It is difficult to predict when news like this may land, and many Fedora contributors had already gone on vacation for the upcoming holiday weekend. We appreciate the hours that many have already put in, and continue putting in, to address this problem and ultimately protect Fedora users from malicious software. Thanks to the timely and prompt action by our packaging and infrastructure community, all users running only on stable update channels were NOT impacted by this vulnerability.

Special recognition goes out to our Fedora Infrastructure Team for coordinating these prompt and timely actions to reduce end-user impact. We could not do it without you!

As a reflection ahead of the upcoming release of Fedora Linux 40, there remains a lot of uncertainty about this exploit. It appears to be a sophisticated breach of trust that may have taken place over an extended period of time. Fedora Linux 40 is around the corner, and it is distinguished from other Fedora releases because it is the branch point for CentOS Stream 10, the next major version of Enterprise Linux. Therefore, if this exploit had been discovered even two or three months later, this vulnerability would also have impacted downstream builds from Fedora and CentOS Stream, including Red Hat Enterprise Linux (RHEL), AlmaLinux, Rocky Linux, Amazon Linux, Oracle Linux, and others.

The prompt actions of our Fedora community first responders and Infrastructure Team are an example of our community working at its best. Thanks for helping keep the Fedora user community safe.

Get in touch about CVE-2024-3094

This is an emerging story and there will be more news and updates about this vulnerability to the xz package set. You can follow your usual channels for updates on security vulnerabilities. You can also reach out to the Fedora developer community on the devel mailing list or #devel:fedoraproject.org on Matrix.


2024-03-30 edit: Fedora community first responders were made aware on Thursday, March 28th.

2024-04-01 edit: Edited to clarify exposure of Fedora Linux 40 users to the tainted package. The updates-testing repository is enabled on ALL pre-release variants including the beta release of Fedora Linux 40.

2024-04-04 edit: Edited the opening paragraph to clarify the specific Fedora Linux 40 package version vulnerable to the exploit, and that F40 users MAY or MAY NOT have received the tainted version.

Wiping Drives - Data Recovery with Open-Source Tools (part 6)

Posted by Steven Pritchard on March 29, 2024 06:26 PM

This is part 6 of a multi-part series.  See part 1 for the beginning of the series.

Wiping drives

To properly wipe a drive so it is effectively unrecoverable, the best solution is to use DBAN. It can be downloaded from https://sourceforge.net/projects/dban/.

Note from 2024: The DBAN project is mostly dead. Currently I would recommend nwipe, which is available in the standard package repositories for a number of Linux distributions, from source at https://github.com/martijnvanbrummelen/nwipe, or on bootable media like SystemRescue.  In fact, SystemRescue has a page in their documentation on this very topic.

In many cases, it is sufficient to simply zero out the entire drive. This can be done using dd_rescue.

To zero out /dev/sda, you can use the following command:

dd_rescue -D -b 1M -B 4k -m $(( $( blockdev --getsz /dev/sda ) / 2 ))k /dev/zero /dev/sda

This uses a bit of a shell scripting trick to avoid multiple commands and copy & paste, but it is still fairly simple. The output of blockdev --getsz gives us the size of the device in 512-byte blocks, so we divide that number by 2 to get the size in 1kB blocks, which we pass to the -m option (with a trailing k to denote kB) to specify the maximum amount of data to transfer. Using a default block size of 1MB (-b) with a fallback of 4kB (-B, to match the host page size, which is required for direct I/O) should give us decent throughput.

Note that we're using -D to turn on direct I/O to the destination drive (/dev/sda), but we're not using direct I/O (-d) to read /dev/zero since /dev/zero is a character device that does not support direct I/O.
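
If you want to verify the wipe afterwards, one simple check (a sketch; it reads the whole drive back, so it takes roughly as long as the wipe) is to compare the drive against /dev/zero:

# cmp reads both inputs until they differ; on a fully zeroed drive it
# runs to the end and reports "cmp: EOF on /dev/sda"
cmp /dev/zero /dev/sda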

To just clear the MS-DOS partition table (and boot sector) on /dev/sda, you could do the following:

dd if=/dev/zero of=/dev/sda count=1
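
If you might want to undo that, a common precaution (a sketch; the backup file name is arbitrary) is to save the sector somewhere safe first:

# back up the first 512-byte sector (partition table plus boot code)
dd if=/dev/sda of=sda-sector0.bin bs=512 count=1

# restore it later if needed
dd if=sda-sector0.bin of=/dev/sda bs=512 count=1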

To be continued in part 7.

RAID - Data Recovery with Open-Source Tools (part 7)

Posted by Steven Pritchard on March 29, 2024 06:25 PM

This is part 7 of a multi-part series. See part 1 for the beginning of the series.

Software RAID

It's becoming increasingly common (in 2009) on desktop PCs to use some form of BIOS-based software RAID. In most cases, dealing with a single-drive failure in a software RAID isn't terribly difficult. For example, with NVIDIA's software RAID, even when one drive out of a stripe (RAID 0) set fails, if the drive is recoverable, you can simply clone it to a new identically-sized drive and the RAID will just work. Unfortunately, this isn't so simple with Intel's software RAID, which appears to store the serial numbers of the drives in the RAID metadata, meaning an exact clone won't work. While it would most likely be possible to edit the RAID metadata with hexedit to update the drive information, a somewhat simpler solution is to make a backup clone of the drives in the array, re-create the RAID exactly as it was in the RAID BIOS, then boot into Linux and run testdisk on the RAID device. More on that in part 8.

Most often the RAID metadata for drives in a software RAID volume is stored toward the end of the drive. In some cases, if you are forced to clone a failing RAID drive to a larger drive, you can make Linux (and maybe the BIOS and Windows) see the drive as a RAID device by copying the last few blocks from the failing drive to the last few blocks of the replacement drive.

# size of the failing drive (sda) and its replacement (sdb), in 1kB blocks
old_end=$(( $( blockdev --getsz /dev/sda ) / 2 ))
end=$(( $( blockdev --getsz /dev/sdb ) / 2 ))
# copy the last 1024kB of RAID metadata from the end of the failing
# drive to the end of the replacement drive
dd_rescue -d -D -b 4k -B 4k -s $(( $old_end - 1024 ))k -S $(( $end - 1024 ))k /dev/sda /dev/sdb

Hardware RAID

Unfortunately, the ways that hardware RAID controllers store metadata don't tend to be quite as predictable as software RAID. If you attach a hardware RAID member drive to a non-RAID controller, some of the tricks mentioned above might work, but there are by no means any guarantees.

Also be aware that hardware RAID controllers are very likely to take a drive offline at the first sign of an error rather than report back the error and continue as most non-RAID controllers would. While this makes hardware RAID controllers largely unusable for data recovery, it does mean that a failing RAID member drive is quite likely to be recoverable.

To be continued in part 8.

Revision control and sheet music

Posted by Adam Young on March 29, 2024 03:38 PM

Musescore is a wonderful tool. It has made a huge impact on my musical development over the past couple decades. Sheet music is the primary way I communicate and record musical ideas, and Musescore the tool and musescore.com have combined to make a process that works for me and my musical needs.

I have also spent a few years writing software, and the methods that we have learned to use in software development have evolved due to needs of scale and flexibility. I would like to apply some of those lessons to how I manage sheet music. But there are disconnects.

The biggest issue I have is that I want the same core song in multiple different but related formats. The second biggest issue is that I want to be able to make changes to a song, and to collaborate with other composers in making those changes.

The sheet music I work with is based on the Western notation system. I use a fairly small subset of the overall notation system, as what I am almost always working toward is a musical arrangement for small groups. The primary use case I have is for lead sheets: a melody line and a series of chords for a single instrument. Often, this is for a horn player. I need to be able to transpose the melody line into the appropriate key and range for that particular horn: E flat for a baritone sax, C for a flute, B flat for a saxophone, C but in bass clef for a trombone.

The second use case I have is to be able to arrange a section harmonically for a set of instruments. This means reusing the melody from a lead sheet, but then selecting notes for the other instruments to play as accompaniment. Often, this can be done on piano first, and then allocating the notes the piano plays out to the single-note horns.

I also wish to make play-along recordings, using ireal pro. This uses a very small subset of the lead sheet information: really just the chords and the repeats. A more complex version could be used to build MMA versions.

When I work with a more complex arrangement, I often find that I change my mind on some aspect: I wish to alter a repeat, a chord symbol, or the set of notes in the melody line. Ideally, that change would be reflected through all the various aspects of the score.

The Musescore files are stored in an XML format developed for the program. This XML file is compressed, and the file extension reflects this: mscz. I have a song that is 40 measures long (32 with 8 bars repeated), with no more than 2 chords per measure and no more than 8 notes per measure. The underlying document used to store this song, when transposed for 6 different instruments, is 22,037 lines long. This is not a document format meant to be consumed by humans.
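
You can see this for yourself (a sketch; the file names here are hypothetical): since an mscz file is an ordinary ZIP archive wrapping the XML score, you can extract it and diff two revisions as text:

# pull the uncompressed .mscx XML score out of the .mscz archive
unzip -o song-v1.mscz -d song-v1/
unzip -o song-v2.mscz -d song-v2/

# a text diff of two revisions shows how noisy the generated XML is
diff -u song-v1/song.mscx song-v2/song.mscx | head -n 40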

Here is a sample of how a single note is represented:

        <Articulation name="marcatoStaccato">
          <velocity>120</velocity>
          <gateTime>50</gateTime>
          </Articulation>

Here is an example of a chord:

 <Chord>
            <linkedMain/>
            <durationType>quarter</durationType>
            <Note>
              <linkedMain/>
              <pitch>74</pitch>
              <tpc>16</tpc>
              </Note>
            </Chord>

This is generated code. When software is responsible for creating more software, it will often produce the output in a format that can then be consumed by another tool designed to read human-readable source and convert it to binary. XML is a notoriously chatty format, and the number of lines in the Musescore file reflects that.

The “document” that we humans interface with is based on pen and paper. If I were to do this by hand, I would start with a blank page of pre-printed staff paper and annotate on it such details as the clef, the key signature, the bar lines, the notes, and the chord symbols. This system is optimized for handwritten production and human reading-on-the-stage consumption. This is how it is displayed on the screen as well. Why, then, is the underlying file format so unfriendly?

Because file formats are based on text, and we have restricted text to very specific rules as well: characters are represented as numeric values (ASCII, now Unicode), and anything implying the layout on the page needs to be embedded in the document as well.

There are options for turning text into sheet music: ABC and LilyPond are both formats that do this. Why don’t I use those? I have tried in the past. But when I am writing music, I am thinking in terms of notation, not in terms of text, and they end up preventing me from doing what I need. They don’t solve the transposition or other problems as-is. Perhaps the issue is that we need more tooling around one of these formats, but then we return to the problem of generated code.

Once the sheet music is printed out, the options for annotating a revision are to write on the sheet music or to edit the original source and reprint it. In practice, both are done often. In front of me on my desk is a copy of a tune where both chords and individual notes have been changed in pencil. After living with these revisions for quite some time, I eventually got them recorded back into the source file and reprinted it.

Carrying around reams of sheet music quickly becomes unfeasible. If you are in multiple groups, each of which has a large repertoire, the need to reduce the physical burden will likely lead you to an electronic solution: sheet music displayed on a tablet. However, the way that you distribute sheet music here will determine what options the artist has for annotating corrections and instructions on the music in front of them: most PDFs don’t let you edit them, and you cannot write on a screen with a ballpoint pen.

As a band director, I would like to be able to note changes on a score for a particular group and automate the distribution of that change to the group.

As a band member I would like to be able to make changes to a score and have them show up when I view the score electronically. Ideally, these changes would be reflected in the ireal/MMA version that I use to practice as well as the sheet music I read.

As a collaborator, I would like to be able to submit a suggested change to a score as played by my group and have the music director be able to incorporate my suggestion into the score.

As a collaborator, I would like to be able to receive suggestions from another collaborator, and provide feedback on the suggestion if I decide not to include it as-is.

As an arranger, I would like to be able to come up with a score on piano and automate the process of converting it to a set of instruments.

As a band leader, I would like to be able to come up with a lead sheet in Musescore and automate the process of converting it to MMA and irealpro formats. I would like to be able to import the accompaniment from an irealpro or MMA format into a musescore document as a first pass for the percussion or other rhythm section arrangement.

I had long thought about an application to make it possible to work on music together with other people. At West Point we used to say Collaborate and Graduate. For this, I say:

Collaborate and Orchestrate.

Fish shell

Posted by Christiano Anderson on March 29, 2024 01:56 PM
After using ZSH for a couple of years, I decided to switch back to the Fish shell. Fish works out of the box, batteries included, and the basic installation provides all the features I expect from a shell environment. I have been using Unix-based systems for over 20 years, including SCO Unix, Solaris, and BSDi. I have been using Linux since 1996; Slackware was the first distribution I used, and now I’m a happy Fedora user.

Infra and RelEng Update – Week 13 2024

Posted by Fedora Community Blog on March 29, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below it.

Week: 25 March – 29 March 2024

I&R infographic

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

If you have any questions or feedback, please respond to this report or contact us in the CPE channel on Matrix.

The post Infra and RelEng Update – Week 13 2024 appeared first on Fedora Community Blog.

DNF 5 and Modularity

Posted by Remi Collet on March 29, 2024 08:39 AM

In an enterprise distribution such as RHEL, because of the very long life cycle (10 years or more), there are two opposing needs:

  • stability, which means keeping the same version for the whole life of the distribution (e.g., EL-7 still provides PHP 5.4)
  • new versions needed by new projects (e.g., a lot of projects now require PHP 8)

So, this means we need to be able to distribute alternative versions in a safe way.

This of course also affects my repository, which has the goal to provide more alternative versions and extensions.

This is not a need for Fedora, which has a very short life cycle (6 months), so there is no need for newer versions in a stable release.

1. The old time

Until EL-7, the main solution was to create one optional repository per version (e.g., the RHWAS channel in EL-4).

This was not perfect: it mostly worked only for newer versions, and it raised conflicts because two versions were available in active repositories.

In EL-5, using different package names was tried (e.g., php for version 5.1 and php53 for version 5.3); this was a real nightmare and was abandoned.

2. Software Collections

A nice idea appeared in EL-6 and EL-7: provide alternative versions in a separate RPM namespace, installed in a separate tree (/opt), allowing the installation of multiple versions simultaneously.

Mostly because of some design faults, the community rejected this, and the project was abandoned in EL-8 (except for newer GCC, in devtoolset).

As the initial design issues were fixed, I really appreciate SCLs and still use them, and I provide them in my repository (see My PHP development Workstation), mostly because I like being able to install multiple versions simultaneously.

3. Modularity

EL-8 introduced modularity, a new way to manage alternative versions in optional streams. When a stream is disabled, its packages are ignored by dnf; when it is enabled, its packages are preferred. This works very well for both newer and older versions.
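
For example, the dnf 4 stream workflow looks like this (a sketch using the php module; the available stream names vary by distribution and release):

# list the module and its streams; [e] marks enabled, [d] default
dnf module list php

# enable the 8.1 stream so its packages are preferred
sudo dnf module enable php:8.1
sudo dnf install php

# go back to the default stream
sudo dnf module reset php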

3.1. Everything is a module

In EL-8 the idea was to provide everything as modules. This was probably a terrible mistake that caused the community to reject it again.

Indeed, especially for libraries, this probably doesn't make sense. It also created a complex dependency tree and a very complex build system (MBS).

3.2. Module only for alternatives

In EL-9 everything was greatly simplified. The base system works without modules, which are used only for alternative versions (e.g., PHP 8.0 by default, with 8.1 and 8.2 available as modules).

This is probably how modularity should have been used from the beginning, and it works really smoothly. MBS is not really required in this simple scheme; a simple build configuration is enough.

But it was too late, and the community (mostly the Fedora one) had already killed it.

4. DNF version 5

This is the successor of DNF version 4, which introduced modules. But, as Fedora chose to stop using modules, the needed features are not implemented.

For now, dnf5 only supports enabling and disabling streams, which is far from usable, and perhaps everything related to modularity will be dropped in the final version.

4.1. Fedora 40

In the upcoming Fedora 40, dnf is still version 4 by default, and dnf5 is also available for testing.

Module management still works, despite a small regression which has a workaround.

4.2. Fedora 41

In the future Fedora 41, dnf version 5 should become the default, probably without modularity.

5. My repository

I plan to continue to provide modules for Fedora 40 and probably EL-10, with dnf 4.

I need to think about later versions; having to switch back to the old way (one repo per version) makes me terribly sad and gives me nightmares.

I've read a proposal to switch back to providing alternative versions under a different namespace, which seems like going 10 years backward to a broken solution.

6. Conclusion

Of course, I dream of seeing Modularity support maintained in dnf 5 ;)

I'm disappointed with the bad Fedora community feedback on solutions proposed to solve Enterprise-only needs.

And what a waste of developer energy on these features (SCL and Modularity).

PHP version 8.2.18RC1 and 8.3.5RC1

Posted by Remi Collet on March 29, 2024 06:10 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the perfect solution for such tests) and also as base packages.

RPMs of PHP version 8.3.5RC1 are available

  • as base packages
    • in the remi-modular-test for Fedora 38-40 and Enterprise Linux ≥ 8
    • in the remi-php83-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

RPMs of PHP version 8.2.18RC1 are available

  • as base packages
    • in the remi-modular-test for Fedora 38-40 and Enterprise Linux ≥ 8
    • in the remi-php82-test repository for Enterprise Linux 7
  • as SCL in remi-test repository

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

PHP version 8.1 is now in security mode only, so no more RCs will be released.

Installation: follow the wizard instructions.

Announcements:

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Update of system version 8.3 (EL-7):

yum --enablerepo=remi-php83,remi-php83-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.2 (EL-7):

yum --enablerepo=remi-php82,remi-php82-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module switch-to php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*

Notice:

  • version 8.3.5RC1 is also in Fedora rawhide for QA
  • EL-9 packages are built using RHEL-9.3
  • EL-8 packages are built using RHEL-8.9
  • EL-7 packages are built using RHEL-7.9
  • oci8 extension uses the RPM of the Oracle Instant Client version 21.13 on x86_64 or 19.19 on aarch64
  • intl extension uses libicu 73.2
  • RC versions are usually the same as the final version (no changes accepted after RC, except for security fixes).
  • versions 8.2.18 and 8.3.5 are planned for April 11th, in 2 weeks.

Software Collections (php82, php83)

Base packages (php)

Please stop using VPN services for privacy!

Posted by Fabio Alessandro Locati on March 29, 2024 12:00 AM
For many years, VPN companies have advertised their VPNs as a necessary tool for anyone who wants to preserve their privacy. For just as long, I have tried to explain to people that this view makes no sense except for those companies’ sales. As an example, Onavo, a Meta subsidiary, used to advertise its services, highlighting that, among other advantages, using their product “protects your personal info”.

Fedora Workstation 40 – what are we working on

Posted by Christian F.K. Schaller on March 28, 2024 06:56 PM
So Fedora Workstation 40 Beta has just come out, and I thought I'd share a bit about some of the things we are working on for Fedora Workstation currently, as well as major changes coming in from the community.

Flatpak

Flatpaks have been a key part of our strategy for desktop applications for a while now, and we are working on a multitude of things to make Flatpaks an even stronger technology going forward. Christian Hergert is working on figuring out how applications that require system daemons will work with Flatpaks, using his own Sysprof project as the proof-of-concept application. The general idea here is to rely on the work that has happened in systemd around sysext/confext/portablectl, trying to figure out how we can get a system service installed from a Flatpak and the necessary bits wired up properly. The other part of this work, figuring out how to give applications permissions that today are handled with udev rules, is being worked on by Hubert Figuière, based on earlier work by Georges Stavracas on behalf of the GNOME Foundation, thanks to sponsorship from the Sovereign Tech Fund. So hopefully we will get both of these important issues resolved soon. Kalev Lember is working on polishing up the Flatpak support in Foreman (and Satellite) to ensure there are good tools for managing Flatpaks when you have a fleet of systems to manage, building on the work of Stephan Bergman. Finally, Jan Horak and Jan Grulich are working hard on polishing up the experience of using Firefox from a fully sandboxed Flatpak. This work is mainly about working with the upstream community to get some needed portals over the finish line and polishing up some UI issues in Firefox, like this one.

Toolbx

Toolbx, our project for handling developer containers, is picking up pace, with Debarshi Ray currently working on getting full NVIDIA binary driver support for the containers. One of our main goals for Toolbx at the moment is making it a great tool for AI development, and thus getting the NVIDIA and CUDA support squared away is critical. Debarshi has also spent quite a lot of time cleaning up the Toolbx website, providing easier access to, and updating, the documentation there. We are also moving to the new Ptyxis (formerly Prompt) terminal application created by Christian Hergert in Fedora Workstation 40. This gives us a great GTK4 terminal, and we also believe we will be able to further integrate Toolbx and Ptyxis going forward, creating an even better user experience.

Nova

So as you probably know, we have been the core maintainers of the Nouveau project for years, keeping this open-source upstream NVIDIA GPU driver alive. We plan to keep doing that, but the opportunities offered by the availability of the new GSP firmware for NVIDIA hardware mean we should now be able to offer a full-featured and performant driver. But co-hosting both the old and the new way of doing things in the same upstream kernel driver has turned out to be counterproductive, so we are now looking to split the driver in two. For older pre-GSP NVIDIA hardware we will keep the old Nouveau driver around as is. For GSP-based hardware we are launching a new driver called Nova. It is important to note here that Nova is thus not a competitor to Nouveau, but a continuation of it. The idea is that the new driver will be primarily written in Rust, based on work already done in the community. We are also evaluating whether some of the existing Nouveau code should be copied into the new driver, since we already spent quite a bit of time trying to integrate GSP there. Worst case, if we can't reuse code, we use the lessons learned from Nouveau with GSP to implement the support in Nova more quickly. Contributing to this effort from our team at Red Hat are Danilo Krummrich, Dave Airlie, Lyude Paul, Abdiel Janulgue, and Phillip Stanner.

Explicit Sync and VRR

Another exciting development that has been a priority for us is explicit sync, which is especially critical for the NVIDIA driver, but which might also provide performance improvements for other GPU architectures going forward. So a big thank you to Michel Dänzer, Olivier Fourdan, Carlos Garnacho, the NVIDIA folks, Simon Ser, and the rest of the community for working on this. This work has just finished upstream, so we will look at backporting it into Fedora Workstation 40. Another major Fedora Workstation 40 feature is experimental support for Variable Refresh Rate, or VRR, in GNOME Shell. The feature was mostly developed by community member Dor Askayo, but Jonas Ådahl, Michel Dänzer, Carlos Garnacho, and Sebastian Wick have all contributed code reviews and fixes. In Fedora Workstation 40 you need to enable it using the command

gsettings set org.gnome.mutter experimental-features "['variable-refresh-rate']"
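
To confirm the setting took effect, or to turn it back off later, the matching read and reset commands are (assuming the same schema):

# check the current experimental feature list
gsettings get org.gnome.mutter experimental-features

# revert to the default (VRR disabled)
gsettings reset org.gnome.mutter experimental-features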

PipeWire

I already covered PipeWire in my post a week ago, but to quickly summarize here too: using PipeWire for video handling is now finally getting to the stage where it is actually happening. Both Firefox and OBS Studio now come with PipeWire support, and hopefully we can also get Chromium and Chrome to start taking a serious look at merging the patches for this soon. What's more, Wim spent time fixing FireWire FFADO bugs, so hopefully for our pro-audio community users this makes their FireWire equipment fully usable and performant with PipeWire. Wim did point out when I spoke to him, though, that the FFADO drivers had obviously never had any consumer other than JACK, so when he tried to allow for more functionality the drivers quickly broke down; Wim has therefore limited the feature set of the PipeWire FFADO module to an exact match of how these drivers were being used by JACK. If the upstream kernel maintainer is able to fix the issues found by Wim, then we could look at providing a fuller feature set. In Fedora Workstation 40, the de-duplication support for v4l vs. libcamera devices should work as soon as we update Wireplumber to the new 0.5 release.

To hear more about PipeWire and the latest developments be sure to check out this interview with Wim Taymans by the good folks over at Destination Linux.

Remote Desktop

Another major feature landing in Fedora Workstation 40, which Jonas Ådahl and Ray Strode have spent a lot of effort on, is finalizing the remote desktop support for GNOME on Wayland. There has already been support for remote connections to logged-in sessions, but with these updates you can do the login remotely too, so the session does not need to be started on the remote machine beforehand. This work will also enable third-party solutions to do remote logins on Wayland systems, so while I am not at liberty to mention names, be on the lookout for more third-party Wayland remoting software becoming available this year.

This work is also important to help Anaconda with its Wayland transition, as remote graphical install is an important feature there. So what you should see is Anaconda using GNOME Kiosk mode and the GNOME remote support to handle this going forward, thus enabling a Wayland-native Anaconda.

HDR

Another feature we have been working on for a long time is HDR, or High Dynamic Range. We wanted to do it properly, and we also needed to work with a wide range of partners in the industry to make it happen. So over the last year we have been contributing to improving various standards around color handling and acceleration to prepare the ground, and working on and contributing to key libraries needed, for instance, to gather the required information from GPUs and screens. Things are coming together now, and Jonas Ådahl and Sebastian Wick will focus on making Mutter HDR-capable. Once that work is done we are by no means finished, but it should put us close to at least being able to run some simple use cases (like some fullscreen applications) while we work out the finer points to get great support for running SDR and HDR applications side by side.

PyTorch

We want to make Fedora Workstation a great place to do AI development and testing. The first step in that effort is packaging up PyTorch and making sure it has working hardware acceleration out of the box. Tom Rix has been leading that effort on our end, and you will see the first fruits of that labor in Fedora Workstation 40, where PyTorch should work with GPU acceleration on AMD hardware (ROCm) out of the box. We hope and expect to be able to provide the same for NVIDIA and Intel graphics eventually too, but this is definitely a step-by-step effort.

Dors/Cluc and DevConf.cz: Two open source events worth visiting

Posted by Bogomil Shopov - Bogo on March 28, 2024 11:45 AM

I hit 10k reads (posted on three platforms) on my article about the three events you should visit in Bulgaria focused on Free and open-source software (FOSS). I decided to expand your knowledge with two more that I can recommend.

I know that 10k hits are nothing, but I am proud of the results for such a niche topic.

Here are my next two proposals:

Dors/Cluc

Zagreb, Croatia
15-19 May, 2024

Let me start by introducing you to a great event that has been organized for over 30 years—yes, 30! The organizers are proud that this is Europe’s oldest conference on GNU/Linux and free software.

It’s held in Zagreb, Croatia, and offers many ways to learn new stuff – Sessions, workshops, small mini-events on a particular topic, and many ways to network and meet people.

Why don’t you combine your thirst for knowledge with a trip to Zagreb where, apart from the food, drinks, and history, you will understand why there are chandeliers from a Las Vegas Casino in a Cathedral?

The team is running a 30% discount campaign for the next few days.

DevConf

Brno, Czechia
13-15 June, 2024

When I started living in the Czech Republic, the people from Prague tried to convince me that the city of Brno was a hoax and that it didn’t exist. I am still not convinced, and I plan to go and visit the DevConf this year to change my mind :)

Apart from the obvious joke, DevConf in Brno has been held almost every year since 2009. The topics vary throughout the years, but the hero is always open source. The primary sponsor is Red Hat, and you might see more focus on technologies and principles related to the software company, but this is usually okay.

This year’s conference will last three days and include ten different themes, including the good ol’ AI.

Attendance is free of charge, and no registration is required. Visit Brno, ensure it’s real, meet many new people, and learn something new.


P.S. I am not associated with any of the events; I just want to support their enormous effort.

A new provisioning tool built with mgmt

Posted by James Just James on March 27, 2024 08:58 PM
Today I’m announcing a new type of provisioning tool. This is both the culmination of a long road, and the start of a new era. Please read on for all of the details. Feel free to skip to the relevant sections you’re interested in if you don’t want all of the background. Ten years: The vision for this specific tool started around ten years ago. Previously, as a sysadmin, I spent a lot of my time using a configuration management tool called puppet.

Alerting on One Identity Cloud PAM Essentials logs using syslog-ng

Posted by Peter Czanik on March 27, 2024 01:22 PM

One Identity Cloud PAM Essentials is the latest security product by One Identity. It provides asset management as well as secure and monitored remote access for One Identity Cloud users to hosts on their local network. I had a chance to test PAM Essentials while it was still in development, and while doing so, I also integrated it with syslog-ng.

From my previous blog, you could learn what PAM Essentials is, and how you can collect its logs using syslog-ng. This blog will show you how to work with the collected log messages and create alerts when somebody connects to a host on your local network using PAM Essentials.

https://www.syslog-ng.com/community/b/blog/posts/alerting-on-one-identity-cloud-pam-essentials-logs-using-syslog-ng

syslog-ng logo

Build custom images for Testing Farm

Posted by Fedora Magazine on March 27, 2024 08:00 AM

You may know the Testing Farm from the article written by David Kornel and Jakub Stejskal. That article highlighted the primary advantages of this testing system. You should review the earlier article, as this one will not go through the basics of Testing Farm usage. It will only delve into the reasons for using your own custom images and explore potential automation methods with HashiCorp Packer.

AWS images

The Testing Farm automatically deploys Amazon Web Services (AWS) machines using a default set of Amazon Machine Images (AMIs) available to users. This curated subset includes popular community images such as CentOS and Fedora Linux. While these AMIs typically represent bare operating systems, they don’t have to remain in that state.

Think of these AMIs as analogous to container images. You have the flexibility to embed all installation and configuration steps directly into the image itself. By doing so, you can preconfigure the environment, ensuring that everything is ready before the actual Testing Farm job begins.

The Trade-Off

However, there’s a trade-off. While customizing AMIs streamlines the process, building them manually can be challenging and time-consuming. The effort involved in creating a well-prepared AMI is substantial.

In an upcoming section of this article, we’ll delve into a practical solution. We’ll explore how to use Hashicorp Packer, a powerful tool for creating machine images, and illustrate its application in the context of the Debezium project.

Benefits of custom images for Testing Farm

There might be some confusion surrounding the rationale for creating custom images, especially considering the investment of time, effort, and resources. However, this question is highly relevant, and the answer lies in a straightforward concept: time efficiency.

Imagine you are testing web applications within containers. You must deploy the database, web server, and other supporting systems each time you perform testing. For instance, when testing against an Oracle database, the container image alone can be nearly 10 GB. Pulling this image for every pull request (PR) takes several minutes.

By building a custom Amazon Machine Image (AMI) that includes this giant image, you eliminate the need to pull it repeatedly. This initial investment pays off significantly in the long run. Additionally, there’s another advantage: reducing unnecessary information exposure to developers. With a preconfigured system, developers can focus solely on the tests without being burdened by extraneous details.

In summary, custom images streamline the testing process, enhance efficiency, and provide a cleaner development experience for your team. Of course, this solution might not be ideal for all use cases and should be used only if it adds value to your testing scenarios. For example, if you are testing packages for Fedora Linux or CentOS and integration with it, you should always use the latest available image on Testing Farm to mitigate the risks associated with a custom image being outdated.

Automate the process with Packer

The trade-off when considering using custom images is that you must create them. This requirement might discourage some developers from pursuing this route. However, there’s good news: Packer significantly improves this experience.

Initially developed by HashiCorp, Packer is a powerful tool for creating consistent Virtual Machine Images for various platforms. AWS (Amazon Web Services) is one of the supported platforms. Virtual Machine images used in an AWS environment are called AMI (Amazon Machine Images).

Image builds are described in HCL format, and Packer provides a rich set of provisioners. These provisioners act as plugins, allowing developers to execute specific tools within the machine from which Packer generates the image snapshot.

Among the most interesting provisioners are:

  • File — Copies files from the current machine to the EC2 instance.
  • Shell — Executes shell scripts within the machine.
  • Ansible — Enables direct execution of Ansible playbooks on the machine.

In the sections that follow, we’ll explore practical examples and how Packer can enhance your image-building process.

Debezium use-case

So far, we have discussed the reasons for using custom images and why you should automate the build, but how can you do that? Let’s showcase this on an actual project! We onboarded Testing Farm with the Debezium project last year. Debezium is the de facto industry standard for CDC (Change Data Capture) streaming. Debezium currently supports about fourteen databases, each with a different setup and hardware needs, but if there is one common feature, it is memory consumption. If those databases run in a container with a minimal amount of RAM (Random Access Memory), they tend to do things like flushing to disk, which is very annoying for testing because you need to rerun the tests after failures, with longer wait times or other workarounds.

Because of that, we have moved part of the testing to the Testing Farm, where we ask for sufficient hardware to ensure databases have enough space and RAM so tests are “stable”. One of the supported databases for Debezium is Oracle DBMS. As was pointed out earlier, Oracle’s container images are quite large, so we had to build the AMI image to give our community the fastest feedback on the PRs.

First, we started working on the Ansible playbook, which installs everything necessary to run the database and our test suite. The playbook looks like this:

# oracle_docker.yml

---

- name: Debezium testing environment playbook
  hosts: 'all'
  become: yes
  become_method: sudo

  tasks:
  - name: Add Docker-ce repository
    yum_repository:
      name: docker-ce
      description: Repository from Docker
      baseurl: https://download.docker.com/linux/centos/8/x86_64/stable
      gpgcheck: no

  - name: Update all packages
    yum:
      name: "*"
      state: latest
      exclude: ansible*

  - name: Install dependencies
    yum:
      name: ['wget', 'java-17-openjdk-devel', 'make', 'git', 'zip', 'coreutils', 'libaio']
      state: present

  - name: Install Docker dependencies
    yum:
      name: ['docker-ce', 'docker-ce-cli', 'containerd.io', 'docker-buildx-plugin']
      state: present

  - name: Unzip oracle libs
    unarchive:
      src: /tmp/oracle-libs.zip
      dest: /root/
      remote_src: true

  - name: Install Oracle sqlplus
    shell: |
      wget https://download.oracle.com/otn_software/linux/instantclient/2113000/oracle-instantclient-basic-21.13.0.0.0-1.el8.x86_64.rpm -O sqlplus-basic.rpm
      wget https://download.oracle.com/otn_software/linux/instantclient/2113000/oracle-instantclient-sqlplus-21.13.0.0.0-1.el8.x86_64.rpm -O sqlplus-adv.rpm
      rpm -i sqlplus-basic.rpm
      rpm -i sqlplus-adv.rpm

  - name: Prepare Oracle script
    copy:
      src: /tmp/install-oracle-driver.sh
      dest: /root/install-oracle-driver.sh
      remote_src: true

  - name: Make executable
    shell: chmod +x /root/install-oracle-driver.sh

  - name: Install maven
    shell: |
      mkdir -p /usr/share/maven /usr/share/maven/ref
      curl -fsSL -o /tmp/apache-maven.tar.gz https://apache.osuosl.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz
      tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1
      rm -f /tmp/apache-maven.tar.gz
      ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

  - name: Start docker daemon
    systemd:
      name: docker
      state: started
      enabled: true

  - name: Pull Oracle images from quay
    shell: |
      docker pull custom.repo/oracle:{{ oracle_tag }}
    when: use_custom|bool == true

  - name: Pull Oracle images from official repository
    shell: |
      docker pull container-registry.oracle.com/database/free:23.3.0.0
    when: use_custom|bool == false

  - name: Logout from registries
    shell: |
      docker logout quay.io
    when: use_quay|bool == true

As you can see, this playbook does everything:

  • Docker installation
  • SQLPlus installation
  • Running some side Oracle init script
  • Installing Maven and all other test suite dependencies
  • Pulling the image

Once all those steps are finished, the machine should be fully prepared to run the test suite and start the database, and we can create an image from a snapshot of this machine. Now it's time to look at the Packer descriptor.

# ami-build.pkr.hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1.2.6"
    }
    ansible = {
      source  = "github.com/hashicorp/ansible"
      version = "~> 1"
    }
  }
}

variable "aws_access_key" {
  type      = string
  sensitive = true
}

variable "aws_secret_key" {
  type      = string
  sensitive = true
}

variable "aws_region" {
  type    = string
  default = "us-east-2"
}

variable "aws_instance_type" {
  type    = string
  default = "t3.small"
}

variable "aws_ssh_username" {
  type    = string
  default = "centos"
}

variable "image_name" {
  type = string
}

variable "oracle_image" {
  type = string
}

variable "source_ami" {
  type    = string
  default = "ami-080baaeff069b7464"
}

variable "aws_volume_type" {
  type    = string
  default = "gp3"
}

source "amazon-ebs" "debezium" {
  access_key            = var.aws_access_key
  secret_key            = var.aws_secret_key
  source_ami            = var.source_ami
  region                = var.aws_region
  force_deregister      = true
  force_delete_snapshot = true
  instance_type         = var.aws_instance_type
  ssh_username          = var.aws_ssh_username
  ami_name              = var.image_name
  ami_users             = ["125523088429"]

  # choose the most free subnet which matches the filters
  # https://www.packer.io/plugins/builders/amazon/ebs#subnet_filter
  subnet_filter {
    filters = {
      "tag:Class" : "build"
    }
    most_free = true
    random    = false
  }

  launch_block_device_mappings {
    device_name           = "/dev/sda1"
    delete_on_termination = "true"
    volume_type           = var.aws_volume_type
    volume_size           = 30
  }
}

build {
  sources = ["source.amazon-ebs.debezium"]
  name    = "debezium-oracle-packer"

  provisioner "file" {
    source      = "./provisioners/files/oracle-libs.zip"
    destination = "/tmp/oracle-libs.zip"
  }

  provisioner "file" {
    source      = "./provisioners/files/install-oracle-driver.sh"
    destination = "/tmp/install-oracle-driver.sh"
  }

  provisioner "shell" {
    script = "./provisioners/scripts/bootstrap.sh"
  }

  provisioner "ansible" {
    playbook_file   = "./provisioners/ansible/oracle_docker.yml"
    extra_arguments = ["-vv", "-e", "oracle_tag=${var.oracle_image}"]
    # Required workaround for Ansible 2.8+
    # https://www.packer.io/docs/provisioners/ansible/ansible#troubleshooting
    use_proxy = false
  }
}

The descriptor above contains all the information Packer needs to build the AMI image. At the start you can see the definitions of all the variables; these are mostly just configuration or sensitive information. Next, you find the configuration of the Amazon plugin (this enables the AMI build). Besides common configuration like secrets and regions, you must also pass source_ami. This field defines the base image for our build. For Debezium, we are using CentOS Stream 8.

The next important field is ssh_username. This field can be tricky because some distros have multiple username variants. For CentOS, it is usually centos or ec2-user. Be careful setting this, because debugging during the build process is challenging.

The last important thing, specifically regarding Testing Farm, is the ami_users field. This field contains an array of users with whom Packer will share the new AMI. This step is necessary to use your image in the Testing Farm environment.

The last part of the descriptor contains all the provisioners you want to run before the AMI build starts. For Debezium, we just copy some libraries and init scripts, run the bootstrap script (this script installs initial dependencies – EPEL and Ansible; you can find it below), and trigger the ansible-playbook (showcased above).

# bootstrap.sh
#!/bin/bash

set -ex

sudo yum install -y epel-release
sudo yum install -y ansible

Once all provisioners and descriptors are complete, you put those in the correct file structure. On Debezium, we use the following:

testing-farm-build
├── ami-build.pkr.hcl
└── provisioners
    ├── ansible
    │   ├── oracle_docker.yml
    │   └── variables.yml
    ├── files
    │   └── install-oracle-driver.sh
    └── scripts
        └── bootstrap.sh

Then, you just have to step into the root directory (testing-farm-build) and start the build. You can begin the packer build with the following command:

packer build -var="aws_secret_key=${AWS_SECRET_ACCESS_KEY}" -var="aws_access_key=${AWS_ACCESS_KEY_ID}" -var="image_name=${AMI_NAME}" -var="aws_ssh_username=centos" . 

You can pass whatever variables you want directly into the command. If you do not want to export some information as an environment variable, do not include it here, and Packer will automatically prompt you for it during the build process.
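
One note from practice (an assumption about your Packer version): with the required_plugins block shown earlier, recent Packer releases expect you to install the plugins and validate the template before building:

# install the amazon and ansible plugins declared in required_plugins
packer init .

# sanity-check the template without launching anything (dummy values
# are enough for validation)
packer validate -var="aws_access_key=dummy" -var="aws_secret_key=dummy" -var="image_name=test" -var="oracle_image=test" .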

Once your AMI is built, you are only one step away from using your image in the Testing Farm environment. You have to open a PR on the Testing Farm infrastructure repository and make the following additions:

  • Add a new regex matcher for your AMI names to the image map – for example.
  • Add your AWS account ID as a new image owner from which Testing Farm will gather images – for example.

After Testing Farm maintainers merge those PRs, your images will be available for provisioning in a couple of minutes. Once they are ready, you should be able to see them here.

Conclusion

Building your custom image for the Testing Farm unlocks a world of possibilities for enhancing your testing workflow. Creating a tailored image can accelerate test runs and provide targeted feedback to your community. And best of all, the entire image build process can be seamlessly automated using Packer with minimal effort. This article should be a helpful guide for fellow Testing Farm users looking to optimize their experience. If you have any questions or need assistance during setup, feel free to reach out — I’m here to help!

Cockpit 314

Posted by Cockpit Project on March 27, 2024 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 314 and cockpit-ostree 201:

Diagnostic reports: Fix command injection vulnerability with crafted report names

Cockpit 270 introduced a possible local privilege escalation vulnerability with deleting diagnostic reports (sosreport). Files in /var/tmp/ are controllable by any user. In particular, an unprivileged user could create an sosreport* file containing a ' and a shell command, which would then run with root privileges when the admin Cockpit user tried to delete the report.

This Cockpit version fixes the problem by removing the files with direct system calls instead of a shell command.

This is tracked as CVE-2024-2947. If you need to backport this to older cockpit versions, you can apply the upstream patch.

If you cannot update or patch, then check the displayed report file names for non-standard characters, in particular ', $, ( and `, and don’t use Cockpit’s Diagnostic reports page to delete them.
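
A quick way to spot suspicious names (a minimal sketch, assuming the default /var/tmp location):

# list sosreport files whose names contain anything beyond the usual
# alphanumerics, dots, underscores, and hyphens
ls /var/tmp | grep '^sosreport' | grep -v '^[A-Za-z0-9._-]*$'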

Storage: Improvements to read-only encrypted filesystems

Cockpit now unlocks encrypted filesystems with a “read-only” encryption layer when the filesystem itself is mounted read-only.

Ostree: Show OCI container origin

cockpit-ostree now detects and shows the origin, repository, and branch name of native container repositories in both the “OSTree source” card and the deployment list:

Screenshot: OCI container origin

Try it out

Cockpit 314 and cockpit-ostree 201 are available now.

Fedora Linux 40 Beta is available for testing

Posted by Charles-Antoine Couret on March 26, 2024 02:28 PM

This Tuesday, March 26th, the Fedora Project community will be delighted to learn of the availability of the Beta release of Fedora Linux 40.

Despite the stability risks that come with a Beta release, it is important to test it! By reporting bugs now, you will discover the new features before everyone else while improving the quality of Fedora Linux 40 and, at the same time, reducing the risk of delays. Development versions lack testers and feedback to achieve their goals.

The final release is currently scheduled for April 16th or 23rd.

User experience

  • Switch to GNOME 46;
  • The KDE Plasma desktop environment moves to its new major version, 6;
  • The firefox.desktop file is renamed to org.mozilla.firefox.desktop to allow its use in the GNOME search bar.

Hardware support

  • ROCm 6 is provided to improve AI and high-performance computing support on AMD graphics cards;
  • Move to phase 2 of Unified Kernel Image (UKI) support (unifying the kernel, initrd, kernel command line, and signature) on UEFI platforms, although nothing changes by default.

Internationalization

  • The IBus input method manager moves to version 1.5.30;
  • ibus-anthy is updated to 1.5.16 for Japanese input.

System administration

  • NetworkManager now tries by default to detect IPv4 address conflicts with the Address Conflict Detection protocol before assigning an address to the machine;
  • NetworkManager will use a random MAC address by default for each different Wi-Fi network, and that address will be stable for a given network, balancing privacy and convenience;
  • systemd system units will use many hardening options by default to improve service security;
  • SELinux policy entries that referred to the /var/run directory now refer to /run;
  • SSSD no longer supports the files provider for managing local users;
  • DNF will no longer download the file lists provided by packages by default;
  • fwupd, the firmware update tool, will use passim as a cache to share firmware update metadata over the local network;
  • Fedora Silverblue and Kinoite systems get bootupd for updating the bootloader;
  • The libuser package is slated for removal in Fedora 41, while the passwd package is removed;
  • The cyrus-sasl-ntlm package has been removed;
  • pam_userdb user management moves from the BerkeleyDB database to GDBM;
  • The bogofilter spam filter uses SQLite instead of BerkeleyDB for its internal database;
  • The 389 LDAP server moves from version 2.4.4 to version 3.0.0;
  • The iotop package is replaced by iotop-c;
  • The Kubernetes container orchestrator moves from version 1.28 to version 1.29;
  • Its packages are also restructured;
  • Meanwhile, podman is updated to version 5;
  • The wget2 package replaces wget, providing a new version;
  • The PostgreSQL database manager migrates to its 16th version;
  • The MySQL and MariaDB packages are reworked and updated to version 10.11.

Development

  • GNU toolchain update: GCC 14.0, binutils 2.41, glibc 2.39, and gdb 14.1;
  • The LLVM compiler suite is updated to version 18;
  • The Boost C++ library is updated to version 1.83;
  • The Go language moves to version 1.22;
  • The reference JDK for Java moves from version 17 to 21;
  • The Ruby language is updated to 3.3;
  • The PHP language uses version 8.3;
  • The PyTorch machine learning toolkit makes its debut in Fedora;
  • The python-sqlalchemy package uses the project's new major 2.x branch, with the python-sqlalchemy1.4 package provided for compatibility;
  • The Pydantic data validation library now uses version 2;
  • The Thread Building Blocks library moves from 2020.3 to 2021.8;
  • The OpenSSL 1.1 library is removed, leaving only the latest version of the 3.x branch;
  • The zlib and minizip libraries are now provided by their zlib-ng and minizip-ng variants;
  • The Python language no longer ships version 3.7.

Fedora Project

  • The Cloud edition will be built with the Kiwi utility in Koji;
  • While the Workstation edition will have its ISO generated with the Image Builder tool;
  • The minimal ARM image will be built with the OSBuild tool;
  • Fedora IoT gains Bootable Container images;
  • It also gains Simplified Provisioning images;
  • And all of it will be built using rpm-ostree unified core;
  • Fedora will be built with DNF 5 internally;
  • The forge macros move from the redhat-rpm-config package to forge-srpm-macros;
  • Package builds will fail if the linker detects certain classes of vulnerability in the binary being built;
  • Phase 3 of the general use of short SPDX license identifiers for package licenses instead of Fedora project names;
  • The end of the line for building updates in Delta RPM format;
  • Continuation of the project to build the JDKs only once and repackage them for all system variants;
  • Packages are compiled with more warnings converted to errors when building projects written in C;
  • Immutable images such as Silverblue will be named under the Atomic brand, avoiding the term immutable, which is confusing for users.

Testing

During the development of a new Fedora Linux version, such as this Beta, the project holds test days almost every week. The goal is to spend a day testing a specific feature, such as the kernel, Fedora Silverblue, upgrades, GNOME, internationalization, etc. The quality assurance team designs and provides a series of tests that are generally simple to run. Just follow them and report whether the result is as expected. If not, a bug report should be opened so a fix can be developed.

It is very simple to follow and often takes little time (15 minutes to an hour at most) if you have a usable Beta at hand.

The tests to run and the reports to file are available via the following page. I regularly announce on my blog when a test day is scheduled.

If the adventure interests you, the images are available via Torrent or from the official site.

If you already have Fedora Linux 39 or 38 on your machine, you can upgrade to the Beta. This amounts to one big update; your applications and data are preserved.

In either case, we recommend backing up your data first.

If you run into a bug, don't forget to read the documentation on reporting issues on Bugzilla, or contribute to translation on Weblate. Remember to check the already-known bugs for Fedora 40 as well.

Happy testing, everyone!

Announcing Fedora Linux 40 Beta

Posted by Fedora Magazine on March 26, 2024 02:00 PM

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 40 Beta, the next step towards our planned Fedora Linux 40 release at the end of April.

Get the prerelease of any of our editions from our project website:

Or, try one of our many different desktop variants (like KDE Plasma, Xfce, or Cinnamon) from Fedora Linux Spins.

You can also update an existing system to the beta using DNF system-upgrade.
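For reference, the documented system-upgrade flow looks roughly like the following sketch (release number per this announcement; check the system-upgrade docs for the current steps):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=40
sudo dnf system-upgrade reboot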

Beta release highlights

Some key things to try in this release!

PyTorch is a popular open-source machine learning framework. We want to make using this tool in Fedora Linux as easy as possible, and it’s now available for you to install with one easy command: sudo dnf install python3-torch

Note that for this release, we’ve only included CPU support, but this lays the groundwork for future updates with support for accelerators like GPUs and NPUs. For now, this is suitable for playing around with the technology, and possibly for some light inference loads.
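If you want a quick smoke test after installing, something like this hypothetical one-liner (not from the announcement) confirms the CPU build imports and runs:

python3 -c "import torch; print(torch.__version__); print(torch.rand(2, 2))"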

Fedora IoT now uses ostree native containers, or “bootable containers”. This showcases the next generation of the ostree technology for operating system composition. Read more in the documentation from ostree and bootc.

Also on the immutable OS front, we’ve revived the “Atomic Desktop” brand for the growing collection of desktop spins based on ostree. An ever-expanding collection of obscure mineral names was fun, but hard to keep straight. We’re keeping well-known Silverblue and Kinoite, and other desktop environments will be, for example, Fedora Sway Atomic and Fedora Budgie Atomic.

Other notable updates

Fedora KDE Desktop now ships with Plasma 6, thanks to a lot of hard work from the Fedora KDE Special Interest Group and the upstream KDE project, and is now Wayland-only. (Don’t worry — X11-native apps will still run under Wayland.)

Fedora Workstation 40 Beta brings us GNOME 46. We’re bringing you Podman 5 for container management. The AMD ROCm accelerator framework is updated to version 6. And, we’ve got the updated language stacks you expect from a new release: LLVM 18 (that’s clang and friends), as well as GCC 14 (with newer glibc, binutils, and gdb).

There are many other changes big and small across the release. See the official Fedora Linux 40 Change Set for more, and check your favorite software for improvements — and, since this is a beta… possibly bugs!

Testing needed

As with any beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora Quality team via the test mailing list or in the #quality channel on Fedora Chat. As testing progresses, common issues are tracked in the “Common Issues” category on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

What is the beta release?

A beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

[Short Tip] Get all columns in a table

Posted by Roland Wolters on March 25, 2024 10:21 PM
<figure class="alignright size-thumbnail"></figure>

When working with larger data structures in Nushell, there are often tables wider than the terminal, so some columns get truncated, indicated by the three dots .... But how can we expand the dots?

❯ ls -la
╭───┬──────────────────┬──────┬────────┬──────────┬─────╮
│ # │ name │ type │ target │ readonly │ ... │
├───┼──────────────────┼──────┼────────┼──────────┼─────┤
│ 0 │ 213-3123-43432.p │ file │ │ false │ ... │
│ │ df │ │ │ │ │
│ 1 │ barcode-picture. │ file │ │ false │ ... │
│ │ jpg │ │ │ │ │
│ 2 │ print-me-by-tomo │ file │ │ false │ ... │
│ │ rrow.pdf │ │ │ │ │
╰───┴──────────────────┴──────┴────────┴──────────┴─────╯

The answer is simple but, surprisingly, not easily found. Oddly, the “Working with tables” documentation of Nushell doesn’t mention it, for example. The trick is to use the command columns to get a list of all column names:

❯ ls -la|columns
╭────┬───────────╮
│ 0 │ name │
│ 1 │ type │
│ 2 │ target │
│ 3 │ readonly │
│ 4 │ mode │
│ 5 │ num_links │
│ 6 │ inode │
│ 7 │ user │
│ 8 │ group │
│ 9 │ size │
│ 10 │ created │
│ 11 │ accessed │
│ 12 │ modified │
╰────┴───────────╯

And once you know that command, you can easily find the corresponding Nushell documentation: nushell.sh/commands/docs/columns.html
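Once you know the column names, you can also pull out just the ones you care about instead of letting the table truncate them; a small sketch using Nushell's select command:

❯ ls -la | select name size modified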

Fedora Ops Architect Weekly

Posted by Fedora Community Blog on March 25, 2024 04:33 PM

Hi folks, welcome to the weekly from your Fedora operations architect. This is an exciting week in the project as our Fedora Linux 40 Beta goes live tomorrow! Have a read on for more information.

Fedora Linux 40

Beta is GO!

Tomorrow, March 26th, our Fedora Linux 40 Beta will release, and I cannot thank our wonderful community enough for all the hard work they have been putting in over the last few months to create it. When it lands, testing how the release behaves, filing bugs, and posting fixes would be hugely appreciated, as our Beta is what we will polish and refine into our official final release in a few weeks. You can learn how and where to file a bug on our docs page.

Reminder: Final Freeze is due to start in one week – 2nd April 2024. Please try to prioritize F40 Beta testing and fixes this week in order to get any fixes submitted and applied before we enter the freeze period. This really helps our QA and release engineering teams on the far side of the freeze to build and test our final release candidate compose(s) in good time to find any pesky bugs.

Save the Dates!

Flock to Fedora is returning this year from August 7th – 10th in Rochester, New York, USA and the call for proposals has officially opened! The deadline is April 21st; check out the blog post for more details on tracks, themes and venue details.

Open Source Summit Europe has a call for proposals currently open – the deadline is April 30th and the conference is set for September 14th – 18th in Vienna, Austria.

The deadline for devconf.cz has now closed. Their schedule will be live towards the end of April, and the conference itself will take place from Thursday 13th – Saturday 15th June. The event is free to attend once you register for tickets, so keep an eye on their website for when registration becomes live.

Fedora Linux 41 Release

Fedora Linux 41 Changes

Announced Changes

Accepted Changes

Help Wanted

Lots of Test Days! Check them out on the QA calendar in fedocal for component-specific days. Help is always greatly appreciated. We also have some packages needing some new maintainers and others needing reviews. See the links below to adopt and review packages!

The post Fedora Ops Architect Weekly appeared first on Fedora Community Blog.

Next Open NeuroFedora meeting: 25 March 1300 UTC

Posted by The NeuroFedora Blog on March 25, 2024 09:17 AM
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 25 March at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date -d 'Monday, March 25, 2024 13:00 UTC'

The meeting will be chaired by @Penguinpee. The agenda for the meeting is:

We hope to see you there!

[Spanish] MTProxy on Fedora/CentOS Stream/RHEL

Posted by Álex Sáez on March 25, 2024 08:37 AM

While I usually write in English for no particular reason, this post is written in Spanish due to the absurd precautionary measure of blocking Telegram.


If you're going to use Ubuntu, this guide is great; it also covers a few things, such as registering the proxy, that I have deliberately skipped.

There are many ways to get around a Telegram block, from changing your DNS to using a VPN, but in my opinion the best one is MTProxy. Although the project seems to have been stalled for a few years, it still works, and as a way out of a tight spot it is an ideal solution.

It is still unknown how use of the platform will be blocked (although, seeing how this kind of thing has been done so far, they will almost certainly block the domains). If that is the case, I personally use NextDNS to block certain pages.

The block is not going to happen.

However, changing your DNS does not strike me as an adequate way to keep using Telegram. It is not always feasible to change it: some routers supplied by Internet service providers do not allow such modifications.

What about a VPN? That solution is a bit drastic. It does work. But unless you know what you are doing, you would be routing all of the device's traffic through the VPN, and that may not be what you want. Maybe you don't want to appear to be browsing from France 24 hours a day, or maybe you can't put all your devices on a VPN, your work computer for example. On top of that, VPN services are of dubious trustworthiness. If you do go down this route, my recommendation is to set up the VPN yourself, using OpenVPN or WireGuard, or to use a service you pay for and trust. I personally use the first option, but ProtonVPN has given me good results in the past.

Why a proxy, and MTProxy in particular? Quite simple: Telegram supports this natively in all the official applications. It supports SOCKS5 and MTProto, and once you have the service up you can share the link with the people you care about. Only Telegram's traffic goes through the proxy, leaving the connections of every other application on the device completely unaffected.

So if you have a machine running Fedora, CentOS Stream 9 or RHEL 9 (it may work with earlier versions, but I haven't tested it), follow these steps. For now I use Linode and a server in Amsterdam, but the virtual machine provider matters least of all. Also make sure you know how to protect and maintain a publicly exposed machine.

This assumes the machine is clean; if it's a machine you already had, the firewall steps may give you trouble. But if you already have a machine, you know what you're doing :)

We are going to run all the commands as root, and they closely follow what the project's own documentation says, with a few small changes to bring it up to date. The service itself will not run as root, don't worry :P

Let's install the dependencies we need to build MTProxy.

dnf install openssl-devel zlib-devel
dnf groupinstall "Development tools"

Now we need to download the project.

git clone https://github.com/TelegramMessenger/MTProxy
cd MTProxy

There is currently a problem building MTProxy; a while ago a user opened a pull request with the fix, so let's apply their patch:

curl -L -O https://patch-diff.githubusercontent.com/raw/TelegramMessenger/MTProxy/pull/531.patch
git am 531.patch
rm -f 531.patch

To build it, we just run:

make

You may see some warnings during compilation and, like any good installation, we are going to ignore them :P (I didn't see them on CentOS Stream 9, but I did on Fedora 39, so it may come down to different GCC settings; I haven't spent much time on this.)

In general, a manually installed application usually goes in /opt, and that's where we are going to put it (read man hier if you're curious).

mkdir /opt/MTProxy
cp objs/bin/mtproto-proxy /opt/MTProxy/
cd /opt/MTProxy/

Now we need to get the secret and the configuration Telegram provides. The configuration can change, so they advise renewing it daily (an ideal candidate for a cron job, as sketched below :))

curl -s https://core.telegram.org/getProxySecret -o proxy-secret
curl -s https://core.telegram.org/getProxyConfig -o proxy-multi.conf

The contents of /opt/MTProxy should now be: mtproto-proxy, proxy-multi.conf and proxy-secret.
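Since the configuration should be refreshed daily, a root crontab entry is one natural way to automate it. A minimal sketch, with a hypothetical schedule, assuming the paths and service name used in this guide:

# Refresh Telegram's proxy configuration daily at 04:30 and restart the service
30 4 * * * curl -s https://core.telegram.org/getProxyConfig -o /opt/MTProxy/proxy-multi.conf && systemctl restart MTProxy.service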

Now we need the secret we will use to authenticate our clients against our server. It can be anything you make up, but what the project suggests is ideal: simply save the output of this command somewhere:

head -c 16 /dev/urandom | xxd -ps

We need to make sure that the firewall, which usually comes enabled by default, doesn't get in our way.

firewall-cmd --permanent --new-service=MTProxy
firewall-cmd --permanent --service=MTProxy --add-port=4242/tcp
firewall-cmd --permanent --add-service=MTProxy
firewall-cmd --reload

If everything went well, you should have seen several success messages. But you can always check that MTProxy is ready in the firewall with:

firewall-cmd --list-all | grep services

Although we already have everything we need, the icing on the cake is to set up MTProxy as a service that runs as an unprivileged user with no shell and no home directory.

First we create the user and give it ownership of /opt/MTProxy.

useradd -M -s /sbin/nologin mtproxy
chown -R mtproxy:mtproxy /opt/MTProxy

Now we need to create the service. Note that you have to tweak the lines a little to include the secret we generated earlier before pasting this into the terminal.

cat <<EOF > /etc/systemd/system/MTProxy.service
[Unit]
Description=MTProxy
After=network.target

[Service]
Type=simple
User=mtproxy
Group=mtproxy
WorkingDirectory=/opt/MTProxy
ExecStart=/opt/MTProxy/mtproto-proxy -H 4242 -S <THE SECRET GOES HERE> --aes-pwd proxy-secret proxy-multi.conf --log mtproxy.log -M 1
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

After this, we are ready to run the service:

systemctl daemon-reload
systemctl enable MTProxy.service
systemctl start MTProxy.service
systemctl status MTProxy.service

To configure it easily, adapt the following link, or share it with whoever you like:

https://t.me/proxy?server=<PUBLIC IP>&port=<PORT>&secret=<THE SECRET>

If you spot any mistakes, please let me know so I can fix them.

Why did I choose Fedora Server?

Posted by Fedora Magazine on March 25, 2024 08:00 AM

I thought it would be a good idea to share my experience implementing servers for personal use. It wasn’t easy to know the best fit for my workload and it has been a moving target, so it was critical to understand and update my needs before taking one route or another.

There are plenty of articles discussing which OS is more appropriate, and some will warn against Fedora Server or even CentOS Stream regarding stability, but it all comes down to the use case. So the context is what makes the difference.

RHEL is predictable with Insights as a bonus

I started out using RHEL (with a developer license) to implement my services. At the time it was the obvious choice, because I needed predictable package versioning. I implemented various services using PHP and databases, which needed consistent versioning of dependencies.

RHEL also gave me the additional bonus of Insights, which is a convenient tool to see CVEs, patches, and other interesting data. But apart from the initial hype while learning its capabilities, I stopped using it almost completely because my server was always up to date, and there wasn’t anything to see in the dashboard. I concluded, therefore, that despite the potential of Insights, it wasn’t something I really needed.

CentOS Stream brings you upgrades ahead of time

RHEL versioning helps in cases where staying on one minor version for a long time is paramount to keeping applications working. But that wasn’t my case. I was always upgrading through minor versions as soon as they were available. So I looked at CentOS Stream as an appealing alternative. It would give me the same stability with the additional benefit of getting the upgrades ahead of time. I made the move and migrated to CentOS Stream.

I was reluctant to use containers in those days, thinking that having my workload installed directly on the server was more efficient. The presumption was that I was running Apache, MariaDB, Postgres, PHP, etc. only once. But there is a caveat to this simplified view, because some of those services fork multiple instances to serve the various requests anyway.

Moving services into containers

A realization came when one of the services needed a newer PHP version which wasn’t available in the standard repo. So I resorted to Modules to install the newer version. Unfortunately, I encountered some issues with dependencies on EPEL that wouldn’t work with some of the newer PHP packages. Also, not all my services worked well with the latest PHP, and so on, and so forth. Long story short: I scrapped all the services and reimplemented them as containers.

It was like being born again. Each service running in its own optimized environment happened to be the perfect solution, and I just kept the Apache HTTP Server as a reverse proxy, along the lines of the sketch below.
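For illustration, a reverse-proxy vhost for one such containerized service might look like this minimal sketch (the hostname and port are hypothetical, and mod_proxy is assumed to be enabled):

<VirtualHost *:80>
    ServerName service.example.org
    # Hand everything off to the container's published local port
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>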

I run my server in the cloud, so resources are pretty limited. Surprisingly, the CPU and RAM consumption didn’t jump as much as I thought it would, meaning that I didn’t have to upgrade my cloud service plan.

Fedora Server turned out to be the best fit

Everything was good until I started using Quadlet to implement my containers. Quadlet is an amazing tool that replaces the deprecated podman-generate-systemd for creating systemd units that handle containers’ lifecycle.

Quadlet is available starting with Podman 4.6, but it is limited to single containers. The implementation of Pods will only be available starting with Podman 5.0. The plan is to include Podman 5.0 in Fedora 40, which in turn will branch into CentOS Stream 10. This means that if I stay on CentOS Stream, I will need to wait approximately 8 months to enjoy this new feature. A Quadlet unit itself is pleasantly small, as the sketch below shows.
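For a sense of what Quadlet looks like, here is a minimal sketch of a container unit (the file name, image, and port are hypothetical examples):

# ~/.config/containers/systemd/web.container
[Unit]
Description=Example containerized web service

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target

After a systemctl --user daemon-reload, Quadlet generates a regular web.service unit that can be started and enabled like any other.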

I was also looking forward to DNF5, but unfortunately it didn’t make it in time for Fedora 40. This means it will only be available in CentOS Stream 11, in another 4 years. Who knows what other cool upgrades I may be missing now or will miss in the future.

After the move to CentOS Stream, I came to another realization. I didn’t need a server with predictable package versioning anymore. So you see where I’m going. On one hand, I’m not getting any particular benefit from running CentOS Stream (or RHEL), because all my workload is containerized. On the other hand, I’m missing the latest software that would make my life easier and more enjoyable. So moving to Fedora Server is a no-brainer.

Another factor I hadn’t thought of before is the upgrade workflow. Staying with CentOS Stream doesn’t guarantee a pathway between major versions, so it is likely that I would need to do a fresh install. Whereas using Fedora Server guarantees a pathway and workflow to move between major releases.

So, all in all, the change makes a lot of sense for my use case. And I’m assuming this is a common scenario.

Week 12 in Packit

Posted by Weekly status of Packit Team on March 25, 2024 12:00 AM

Week 12 (March 19th – March 25th)

  • Packit no longer shows status checks for not yet triggered manual tests. (packit-service#2375)
  • packit validate-config now checks whether upstream_project_url is set if pull_from_upstream job is configured. (packit#2254)
  • We have fixed an issue in %prep section processing. For instance, if the %patches macro appeared there, it would have been converted to %patch es, causing a failure when executing %prep later. (specfile#356)

Episode 421 – CISA’s new SSDF attestation form

Posted by Josh Bressers on March 25, 2024 12:00 AM

Josh and Kurt talk about the new SSDF attestation form from CISA. The current form isn’t very complicated, and the SSDF has a lot of room for interpretation. But this is the start of something big. It’s going to take a long time to see big changes in supply chain security, but we’re confident they will come.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_421_CISA_new_SSDF_attestation_form.mp3

Show Notes

Contribute at the Fedora Linux Test Week for Kernel 6.8

Posted by Fedora Magazine on March 23, 2024 05:14 PM

The kernel team is working on final integration for Linux kernel 6.8. This version was just recently released, and will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, March 24, 2024 to Sunday, March 31, 2024. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test week has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test week web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.

Happy testing, and we hope to see you on one of the test days.

Cloning Drives - Data Recovery with Open-Source Tools (part 5)

Posted by Steven Pritchard on March 22, 2024 01:06 PM

This is part 5 of a multi-part series. See part 1 for the beginning of the series.

Cloning hard drives with dd_rescue

In cases where a hard drive is failing, often simply cloning the drive is all that is required to recover data. There are many other situations where cloning a drive is important though, such as when attempting to recover from a broken partition table or major filesystem corruption.

The primary tool for cloning drives is called dd_rescue. Running dd_rescue -h or simply dd_rescue with no options will give you a summary of the various command-line options:

dd_rescue Version 1.14, garloff@suse.de, GNU GPL
 ($Id: dd_rescue.c,v 1.59 2007/08/26 13:42:44 garloff Exp $)
dd_rescue copies data from one file (or block device) to another.
USAGE: dd_rescue [options] infile outfile
Options: -s ipos start position in input file (default=0),
	     -S opos start position in output file (def=ipos),
	     -b softbs block size for copy operation (def=65536),
	     -B hardbs fallback block size in case of errs (def=512),
	     -e maxerr exit after maxerr errors (def=0=infinite),
	     -m maxxfer maximum amount of data to be transfered (def=0=inf),
	     -y syncfrq frequency of fsync calls on outfile (def=512*softbs),
	     -l logfile name of a file to log errors and summary to (def=""),
	     -o bbfile name of a file to log bad blocks numbers (def=""),
	     -r reverse direction copy (def=forward),
	     -t truncate output file (def=no),
	     -d/D use O_DIRECT for input/output (def=no),
	     -w abort on Write errors (def=no),
	     -a spArse file writing (def=no),
	     -A Always write blocks, zeroed if err (def=no),
	     -i interactive: ask before overwriting data (def=no),
	     -f force: skip some sanity checks (def=no),
	     -p preserve: preserve ownership / perms (def=no),
	     -q quiet operation,
	     -v verbose operation,
	     -V display version and exit,
	     -h display this help and exit.
Note: Sizes may be given in units b(=512), k(=1024), M(=1024^2) or G(1024^3) bytes
This program is useful to rescue data in case of I/O errors, because
 it does not necessarily abort or truncate the output.

Note that there is also a GNU ddrescue with a similar feature set, but with entirely incompatible command-line arguments.

In the simplest of cases, dd_rescue can be used to copy infile (let's say, for example, /dev/sda) to outfile (again, for example, /dev/sdb).

dd_rescue /dev/sda /dev/sdb

In most cases, you'll want a little more control over how dd_rescue behaves though. For example, to clone failing /dev/sda to /dev/sdb:

dd_rescue -d -D -B 4k /dev/sda /dev/sdb

(to use the default 64k block size) or, for really bad drives, to force only one read attempt:

dd_rescue -d -D -B 4k -b 4k /dev/sda /dev/sdb

Adding the -r option to read backwards also helps sometimes.

Changing block sizes

By default, dd_rescue uses a block size of 64k (overridden with -b). In the event of a read error, it tries to read again in 512-byte chunks (overridden with -B). If a drive is good (or only beginning to fail), a larger block size (usually in the 512kB-1MB range) will give you significantly better performance.
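For example, on a drive that is still reading cleanly, you might raise the soft block size while keeping the small fallback, along these lines:

dd_rescue -d -D -b 1M -B 4k /dev/sda /dev/sdb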

If a drive is failing, forcing the default block size to the same value as the fall-back size will keep dd_rescue from re-reading (and therefore possibly damaging) failed blocks.

Direct I/O

The -d and -D options turn on direct I/O for the input and output files respectively. Direct I/O turns off all OS caching, both read-ahead and write-behind. This is much more efficient (and safer) when reading from and writing to hard drives, but should generally be avoided when using regular files.

Other useful options

-r        Read backwards. Sometimes works more reliably. (Very handy trick...)

-s num    Start position in input file.

-S num    Start position in output file. (Defaults to the same as -s.)

-e num    Stop after num errors.

-m num    Maximum amount of data to read.

-l file   Write a log to file.
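Putting several of these options together, a cautious copy that gives up after 100 errors and records both a log and a bad-block list might look like this sketch (the log file names are arbitrary):

dd_rescue -d -D -b 1M -B 4k -e 100 -l sda.log -o sda.bb /dev/sda /dev/sdb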

Copying partitions

Let's say you have a drive with an MS-DOS partition table. The drive has two partitions. The first is an NTFS partition that seems to be intact. The second partition is an unknown type. Rather than copying every block using dd_rescue, you want to copy only the blocks that are in use to a drive that is the same size.

To do this, first copy the boot sector and partition table from /dev/sda to /dev/sdb using dd:

dd if=/dev/sda of=/dev/sdb count=1

The default block size of dd is 512 bytes, which, conveniently, is the size of the boot sector plus partition table at the beginning of the drive.

Note: This trick doesn't quite work on MS-DOS partition tables with extended partitions! In that case, use sfdisk to copy the partition table (after running the above command to pick up the boot sector):

sfdisk -d /dev/sda | sfdisk /dev/sdb

Next, re-read the partition table on /dev/sdb using hdparm:

hdparm -z /dev/sdb

Next we can clone the NTFS filesystem on /dev/sda1 to /dev/sdb1 using the ntfsclone command from ntfsprogs:

ntfsclone --rescue -O /dev/sdb1 /dev/sda1

Finally, we clone /dev/sda2 to /dev/sdb2 using dd_rescue with a 1MB block size (for speed):

dd_rescue -d -D -B 4k -b 1M /dev/sda2 /dev/sdb2

To be continued in part 6.