Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 0 Posts
  • 332 Comments
Joined 2 years ago
Cake day: October 4th, 2023



  • I care less about speakerphone than I do Bluetooth headsets or regular phone speaker use near me.

    The speakerphone makes more noise!

    Yes, but people already have conversations between each other in public where we can hear both sides. We train ourselves to tune those out. A speakerphone is analogous to that case of another human talking.

    What I find most disruptive about phone conversations near me versus listening to two other people talking (which I can tune out) is that the speech pattern of a phone user is to say something and then pause. The problem is that that is exactly the signal that someone has said something to you, and that your attention is required. I have a harder time ignoring those one-sided conversations than tuning out a conversation where I can hear both sides, because it’s basically constantly giving my head the “you just missed something and need to respond” signal. It’s like when someone says something to you, waits for a few seconds, and then your attention gets triggered and you look up and say “what?”

    Now, the article does also reference someone turning a speakerphone way up, and that I can get, if you’re playing it louder than a human would speak. But that’s also kinda a special case.

    I think that in general, the best practice is to text, and I think that most would agree that that’s uncontroversially the best approach in public. But after that, I’d personally prefer to have speakerphone use, above headset or regular phone use.

    EDIT: One interesting approach — I mean, smartphone vendors would always like to have new reasons to sell more hardware, so if they can figure out how to make it work, they might jump on it — might be phones capable of picking up subvocalization.

    https://en.wikipedia.org/wiki/Subvocalization

    Subvocalization, or silent speech, is the internal speech typically made when reading; it provides the sound of the word as it is read.[1][2] This is a natural process when reading, and it helps the mind to access meanings to comprehend and remember what is read, potentially reducing cognitive load.[3]

    This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading.[3]

    You’d probably also need some sort of speech synthesizer rig capable of converting that into speech.

    A conversation where someone’s using headphones/earbuds and a subvocalization-pickup phone would avoid some of the limitations of texting (not limited to text input speed on an on-screen keyboard or having to look at the display), provide for more privacy for phone users, and not add to sound pollution affecting other people in the environment.

    EDIT2: Other possibilities for the speaker side:

    Bone conduction

    This has actually been done, but has some limitations on the sound it can produce, and you need to have a device in contact with your head.

    https://en.wikipedia.org/wiki/Bone_conduction

    Bone conduction is the conduction of sound to the inner ear primarily through the bones of the skull, allowing the hearer to perceive audio content even if the ear canal is blocked. Bone conduction transmission occurs constantly as sound waves vibrate bone, specifically the bones in the skull, although it is hard for the average individual to distinguish sound being conveyed through the bone as opposed to the sound being conveyed through the air via the ear canal. Intentional transmission of sound through bone can be used with individuals with normal hearing—as with bone-conduction headphones—or as a treatment option for certain types of hearing impairment. Bones are generally more effective at transmitting lower-frequency sounds compared to higher-frequency sounds.

    The Google Glass device employs bone conduction technology for the relay of information to the user through a transducer that sits beside the user’s ear. The use of bone conduction means that any vocal content that is received by the Glass user is nearly inaudible to outsiders.[47]

    Phased-array speakers to produce directional sound

    Here, you need to have the device track its position and orientation relative to a given user’s ears, then have a phased array of speakers that each play the sound at just the right phase offset to produce constructive interference in the direction of the user’s ears — it’s beamforming with sound. Other users will have a hard time hearing the sound, which will be garbled and quieter, because of destructive interference in their direction.

    https://en.wikipedia.org/wiki/Beamforming

    Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception.[1] This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.

    We more-frequently use this for reception than for transmission, with microphone arrays, but you can make use of it for transmission. You’ll need a minimum number of speakers in the array to be able to play beams of sound with constructive interference in the direction of a given number of listeners.
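The delay-and-sum idea above can be sketched numerically. Here's a minimal, illustrative Python model; all the parameters (array size, speaker spacing, tone frequency, steering angle) are made up for the example, not taken from any real device:

```python
import math

# Toy delay-and-sum (phased-array) transmit beamforming for a linear array
# of N speakers with uniform spacing, steering a single tone toward a listener.

def steering_delays(n_speakers, spacing, angle_rad, c=343.0):
    """Per-speaker delays (seconds) so emissions add in phase toward angle_rad."""
    return [i * spacing * math.sin(angle_rad) / c for i in range(n_speakers)]

def array_gain(n_speakers, spacing, steer_rad, listen_rad, freq, c=343.0):
    """Relative amplitude heard from direction listen_rad when steering toward steer_rad."""
    delays = steering_delays(n_speakers, spacing, steer_rad, c)
    re = im = 0.0
    for i, tau in enumerate(delays):
        # Phase of speaker i's tone as seen from direction listen_rad
        phase = 2 * math.pi * freq * (i * spacing * math.sin(listen_rad) / c - tau)
        re += math.cos(phase)
        im += math.sin(phase)
    return math.hypot(re, im) / n_speakers  # 1.0 = fully constructive

# Steer an 8-speaker array (5 cm spacing, 2 kHz tone) at 30 degrees, then
# compare loudness on-axis versus 40 degrees the other way.
on_axis = array_gain(8, 0.05, math.radians(30), math.radians(30), freq=2000)
off_axis = array_gain(8, 0.05, math.radians(30), math.radians(-40), freq=2000)
```

On-axis the per-speaker phases cancel exactly, so the gain is 1.0; off-axis the contributions largely cancel and the level drops sharply, which is the “garbled and quieter” effect for other listeners.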


  • I don’t presently need to use any service that requires use of a smartphone. I’ve never had a smartphone tied to a Google/Apple account. I don’t even think that I currently have any apps from the Google Store on my phone — just open-source F-Droid stuff.

    It’s true that hypothetically, you could depend on a service that does require you to use an Android or iOS app to make use of it. There are services that do require that. Lyft, for example, looks like it requires use of an app, though Uber doesn’t appear to do so. And I can’t speak as to your specific situation, but at least where I am, in the US, I’ve never needed to use an Android or iOS app to make use of some class of service.

    But I will say that services will track what people use, and if people continue to use interfaces other than smartphone apps to make use of their services, that makes it more likely that that’s what they’ll keep providing.

    I can’t promise that no one, somewhere in the world, in some country or city or specific place, will be required to use an Android or iOS app without an alternative, whether now or down the line. But they can, at least, limit their use to that app, rather than using it more-broadly. I don’t make zero use of my smartphone software now — like, when I’m driving, I’ll use the open-source OSMAnd to navigate. I sometimes check for Lemmy updates when waiting in line or similar. I don’t normally listen to music while just walking around, but if I did, I’d use a music player on the phone rather than a laptop for it. But I try to shift my usage to the laptop as much as is practical.


  • I don’t intend to get rid of my smartphone, but I do carry a larger device with me, and try to use the phone increasingly as just a dumbphone and cell modem for that device to tether to.

    That may not be viable for everyone — it’s not a great solution to “I’m standing in line and want to use a small device one-handed”. And iOS/Android smartphones are heavily optimized to use very little power; carrying any additional device means more power draw. It probably means carrying a larger case/bag/backpack of some sort with you. And most phone software is designed to know about and be aware of cell network constraints, like acting differently based on whether you’re connected to a cell network for data or a WiFi network for data.

    However, it doesn’t require shifting to a new phone ecosystem. It also makes any such future transition easier — if I have a lot of experience tied up in Android/iOS smartphone software, then there’s a fair bit of lock-in, since shifting to another platform means throwing out a lot of experience in that phone software. If my phone is just a dumbphone and a cell modem, then it’s pretty easy to switch.

    And it’s got some other pleasant perks. Phone OSes tend to be relatively-limited environments. They’re fine for content consumption, like watching YouTube or something, but they’re considerably less-capable in a wide range of software areas than desktop OSes. A smartphone has limited cooling; laptops are significantly more-able to deal with heat. Due to very limited physical space, smartphones usually have very few external connectors — you probably get only a single USB-C connector, and no on-phone headphones jack. You’re probably looking at a USB hub or adapters and rigging up pass-through power if you want anything else. Laptops normally have a variety of USB connectors, a headphones jack, maybe a wired Ethernet connector, maybe an external display jack. Laptops tend to have a larger battery, so it’s reasonable to use the laptop to power external devices like trackballs/larger trackpads, keyboards, etc. You get a larger display, so you don’t have to deal with the workarounds that smartphones have to do to make their small screens as usable as possible. You don’t have to deal with the space constraints that make a touchscreen necessary, having your fingers in front of whatever you’re looking at (though you can get larger devices that do have touchscreens, if you want). You have far more choices on hardware, and that hardware is more-customizable (in part because the hardware likely isn’t an SoC, though you can get an SoC-based laptop if you want). Software support isn’t a smartphone-style “N years, tied to the phone hardware vendor, at which point you either use insecure software or throw the phone out and buy a new one”.


  • Yeah, there’s some nuclear power plant here in the US that uses sewage for cooling. It’s out in the middle of the desert, Arizona or New Mexico or something, somewhere where it’d be a pain to bring in a bunch more water.

    searches

    Arizona.

    https://en.wikipedia.org/wiki/Palo_Verde_Nuclear_Generating_Station

    The Palo Verde Generating Station is a nuclear power plant located near Tonopah, Arizona[5] about 45 miles (72 km) west of downtown Phoenix. Palo Verde generates the second most electricity of any power plant in the United States per year, and is the second largest power plant by net generation as of 2021.[6] Palo Verde has the third-highest rated capacity of any U.S power plant. It is a critical asset to the Southwest, generating approximately 32 million megawatt-hours annually.

    At its location in the Arizona desert, Palo Verde is the only nuclear generating facility in the world that is not located adjacent to a large body of above-ground water. The facility evaporates water from the treated sewage of several nearby municipalities to meet its cooling needs. Up to 26 billion US gallons (~100,000,000 m³) of treated water are evaporated each year.[12][13] This water represents about 25% of the annual overdraft of the Arizona Department of Water Resources Phoenix Active Management Area.[14] At the nuclear plant site, the wastewater is further treated and stored in an 85-acre (34 ha) reservoir and a 45-acre (18 ha) reservoir for use in the plant’s wet cooling towers.
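As a quick sanity check on the quoted figures, converting 26 billion US gallons with the exact gallon-to-cubic-meter factor does land near the ~100,000,000 m³ stated:

```python
# Sanity-check the quoted figure: 26 billion US gallons in cubic meters.
US_GALLON_M3 = 0.003785411784  # exact by definition (231 cubic inches)

gallons = 26e9
cubic_meters = gallons * US_GALLON_M3
# Comes out to roughly 9.84e7 m3, i.e. the "~100,000,000 m3" in the article.
```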


  • New York City is a port city. It has an effectively infinite supply of salt water, which you can use for evaporative cooling, albeit with some extra complications.

    EDIT: Hell, you can use the waste energy from an evaporative cooler to drive a distiller to generate fresh water from some of the evaporated salt water, if you want. Microsoft is doing that combined datacenter-nuclear-power-plant thing. IIRC, if I’m not conflating two different cases of an AI datacenter using the full output of a power plant, they have the entire output of a nuclear power plant never touching the grid (and thus avoiding any transmission cost overhead and, as a bonus, avoiding regulatory requirements attached to transmission and distribution from power generation):

    https://arstechnica.com/ai/2024/09/re-opened-three-mile-island-will-power-ai-data-centers-under-new-deal/

    Re-opened Three Mile Island will power AI data centers under new deal

    Microsoft would claim all of the nuclear plant’s power generation for at least 20 years.

    From past reading, desalination via reverse osmosis has wound up being somewhat cheaper than distillation, but combined generation-distillation using waste heat is a thing. IIRC Spain has some company that does combined generation-distillation facilities.

    And in a case like that, you have the waste heat from generation and the waste heat from use all in one spot, so you’ve got a lot of water vapor to condense.

    EDIT2: Yeah, apparently distillation used to be ahead for desalination, but reverse osmosis processes improved, and currently hold the lead:

    https://www.sciencedirect.com/science/article/pii/S1359431124026292

    As desalination is a process of removing dissolved solids such as salts and minerals from water, there are two main types of technology commonly used in the industry: thermal-based and membrane-based [22]. The thermal-based desalination processes, such as multi-stage flash distillation (MSF) and multiple-effect distillation (MED) were once predominantly used in the water sector until membrane-based desalination technology, such as reverse osmosis (RO), matured and offered lower operating costs [23]. Hence, RO is the most used desalination process today, producing between 61 % and 69 % of the total global desalinated water, followed by MSF (between 17 % and 26 %) and MED (between 7 % and 8 %) [9], [16], [19], [20], [24].


  • Well, cool. Hope it was helpful, then.

    I’ll also mention one other point, if you’re a big emacs and Firefox user. Won’t solve the issue for URL bars or other non-webpage text, but if you do a fair bit of writing in HTML textareas in webpages, like on Lemmy instances or something, you can hand that off to emacs.

    • Install the edit-server package for emacs (M-x list-packages, wait for the emacs package manager to load the list, go to edit-server, hit “i” to flag for install and “x” to execute, or M-x package-install and just type out “edit-server”).

    • In an emacs instance, run M-x edit-server-start (or set it up to run automatically at emacs startup; I don’t, since I run multiple emacs instances).

    • Grab the Edit with Emacs Firefox addon. Install.

    Now, by default all textareas will have a little blue button at the bottom reading “edit”. Click it, and your textarea will open up in emacs. C-c C-c to commit changes back to the textarea (or C-x C-c, if you’re exiting that instance of emacs). You can also right-click on the textarea and choose “Edit with Emacs”.


    I have, in the past, recommended that anyone who uses Unix systems in a technical way know at least how to do the following in vi (and I’m an emacs user):

    • Close the program, discarding changes. From vim’s command mode, : q ! RET.

    • Exit writing changes. From vim’s command mode, : w q RET.

    • Move the cursor around. Today, usually you can get by with arrow keys – I haven’t been on a system where one thing or another was dicked up in a way that rendered arrow keys unusable in many years, but from Vim’s command mode, “h”, “j”, “k”, and “l”.

    • Enter insert mode to Insert text. From vim’s command mode, “i”.

    • Exit insert mode. From vim’s insert mode, ESC.

    • Search for text. From vim’s command mode, “/”, the text to search for, and RET.

    • Replace text. From vim’s command mode, “: %s/foo/bar/g” to change all instances of foo to bar in a given file.

    If you’ve got that much and you ever find yourself on a system that only has vi available (and it may not be vim), you can at least do the basics.

    But the widespread deployment of nano has made learning basic vi less important than was once the case. Even very small systems that I’ve run into tend to have nano.

    Note that busybox, a popular statically-linked multi-call binary bundling many standard Unix utilities, often used in a rescue-the-horribly-broken-system scenario, does not have nano but does have a minimal “vi”-alike, so you might still want to know vi in that case.


    If you’re specifically wanting to fiddle with the Firefox keybindings the way I have (which may be too much effort for some people to deal with re-learning), the settings are:

    • Go change the GTK setting for the appropriate version of GTK. On my Debian trixie system, Firefox is using GTK 3:

        $ apt depends firefox|grep gtk
        
        WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
        
        Depends: libgtk-3-0t64 (>= 3.13.7)
        $
      

      I don’t know whether it will change to GTK 4 anytime soon.

    • Go to about:config in the Firefox URL bar.

    • Set ui.key.menuAccessKey to 0. Normally, IIRC — it’s been many years that I’ve had this set, so I don’t even really remember the original behavior for certain — Firefox uses Alt in sorta the same way Windows does, to open menus. I don’t like that, and this disables that functionality entirely to free up Alt.

    • Set ui.key.accelKey to 18. This makes the now-free Alt act like Control does as regards Firefox keyboard shortcuts.

    • I also have ui.key.contentAccess set to 0. It’s been many years since I’ve set this, but…let’s see. goes to look That disables the contentAccess key, preventing webpages from grabbing Shift-Alt-key sequences, which can also collide. Like, in the Lemmy HTML textarea that I’m currently writing this in, I can hit Shift-Alt-B and Shift-Alt-F to move the cursor backward and forward by a word while selecting text, the way I could in emacs (or, in emacs parlance, move point while extending the region).
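If you’d rather set these persistently in a profile’s user.js file than click through about:config, the equivalent lines (same pref names and values as above; the comments are mine) would be:

```
// user.js in the Firefox profile directory
user_pref("ui.key.menuAccessKey", 0);   // stop Alt from opening menus
user_pref("ui.key.accelKey", 18);       // use Alt as the shortcut modifier
user_pref("ui.key.contentAccess", 0);   // stop pages grabbing Shift-Alt-<key>
```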

    The problem is that now you’ve got Firefox, where things like “Alt-A” will select all text, alongside other GUI apps that may be using “Control-A” for that, so you gotta train your muscles to deal with it. I haven’t thought about it in many years, as it’s automatic now, but I remember it being super-obnoxious when I started.

    For me, it was worth it, because I use emacs all the time and Firefox all the time, and rarely use other graphical apps, so it reduced the amount of switching. But…depends on what someone’s particular situation is, whether that makes sense for them.

    EDIT: Had accidently written ui.key.contentAccess twice. Corrected the first instance to be ui.key.menuAccessKey.


  • It was originally pico, which IIRC was bundled with the pine email client (a “tree name” pun off elm, an older email client, whose name came from “ELectronic Mail”; this pun extended to many other Unix email clients, like mulberry and such). I think that “pico” probably stood for something like “PIne COmposer”. Because it was designed to be particularly approachable, listed the basic commands at the bottom of the screen, and pine was installed on a lot of systems, it kind of got adopted as the “Unix notepad for the terminal” — a simple editor that most users could use for lightweight tasks. Then IIRC due to pine predating standard open-source licenses, the nano clone was created to be GPL.

    searches

    https://en.wikipedia.org/wiki/Pico_(text_editor)

    Yeah, it’s “pine composer”, it was indeed bundled with pine. And apparently part of the problem was that the license didn’t fully spell out the conditions under which it could be redistributed.


  • I am even starting to miss my emacs keybinds when not using emacs (especially ctrl-k for killing from your cursor position to the end of the line, ctrl-a for jumping to the beginning of a line, and ctrl-e for jumping to the end of a line)

    A number of software packages permit use of basic emacs keybindings.

    It’s the default in bash, which uses readline. If someone is a vi user, they can enable vi keystrokes in software that uses readline with editing-mode vi in their ~/.inputrc.
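For reference, the actual ~/.inputrc line is a `set` directive:

```
# ~/.inputrc: use vi keybindings in readline-based programs (bash, etc.)
set editing-mode vi
```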

    For GTK-based apps, looking on my system:

    GTK 1: in ~/.gtkrc:

    gtk-key-theme-name = "Emacs"
    

    GTK 2: in ~/.gtkrc-2.0:

    gtk-key-theme-name = "Emacs"
    

    GTK 3: in ~/.config/gtk-3.0/settings.ini

    gtk-key-theme-name="Emacs"
    

    GTK 4 apparently can’t do this.

    Note that this can collide with other keybindings that a given GTK app has assigned to it. I moved the standard Firefox modifier key from Control to Alt to reduce the impact of that on my system. That was a little obnoxious to get worked into muscle memory, but I did ultimately manage it.



  • Depends pretty wildly on what you like.

    Some things that I do:

    I never want automatic locking — I always lock my machine manually when leaving it, with Super-\ (Super normally being the Windows key). I also want my monitor to power off after a few seconds in that mode, and then wake back up if I start typing. I also want to use a black screen rather than swaylock’s default white.

    in my ~/.config/sway/config:

    set $mod Mod4
    
    # Handle session-locking triggered by stuff like loginctl lock-session
    exec_always swayidle -w \
                 lock 'myscreensaver.sh'
    
    # Bind Super-backslash to trigger a session lock
    bindsym $mod+backslash exec "loginctl lock-session"
    

    in ~/bin/myscreensaver.sh:

    #!/bin/bash
    # Script that handles things necessary to "lock" the system when I'm away
    
    # Pause any music that mpd is playing
    mpc pause
    
    swayidle \
        timeout 5 'swaymsg "output * dpms off"' \
        resume 'swaymsg "output * dpms on"' &
    swaylock -c 000000
    kill %1
    

    By default, X11 (and Wayland, apparently, though I’ve spent less time looking at Wayland’s APIs) don’t “store” clipboard state – they just facilitate transferring copied data from one application to another application where it’s being pasted. This means that if one copies data in one application and then closes that application, the clipboard contents go away. I don’t really like this behavior. One way to avoid it is to use a “clipboard manager” — a piece of software that saves a copy of the data itself. If you install clipman and wl-clipboard, you can do this:

    In ~/.config/sway/config:

    # Make clipboard persist after application termination
    exec_always flock -n $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY-wl-paste wl-paste --watch clipman store
    

    I want to get notifications when my laptop battery is low. Install poweralertd, and then in ~/.config/sway/config:

    # Power notification support
    exec_always flock -n $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY-poweralertd poweralertd
    

    I don’t use it that much, but Sway has a “scratchpad”, where one can stuff a window that one is working with. With this, I send the current window there with Super-Shift-minus then make it pop back up as a floating window with Super-minus:

    # Scratchpad
    bindsym $mod+shift+minus move window to scratchpad
    bindsym $mod+minus scratchpad show
    

    I have a mute button on my laptop’s keyboard. I make that mute the default PipeWire sound sink. In ~/.config/sway/config:

    bindsym XF86AudioMute exec "wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle"
    

    I swap Caps Lock and left Control. This is how traditional Unix keyboards worked, and is generally more-friendly if you are using the Control key a lot more than Caps Lock. I’d consider this almost a precondition to use emacs without making your left pinky miserable. In ~/.config/sway/config:

    input "type:keyboard" {
          xkb_options ctrl:swapcaps,compose:menu,compose:ralt
    }
    

    That also sets my menu key (on my desktop) and right Alt key (on my laptop, which doesn’t have a menu key) to be Compose. That way I can do things like type “ö” by hitting Compose o and then the double-quote key, or an em-dash (“—”) with Compose hyphen hyphen hyphen.

    I don’t like keeping my mouse pointer visible unless I’m actually moving it. In my ~/.config/sway/config, this will hide it after three seconds:

    # Hide mouse cursor after a period of inactivity.
    seat seat0 hide_cursor 3000
    

    I like to have a way to, using the keyboard, hide the latest notification shown by swaync, the default Sway notification manager. In ~/.config/sway/config:

    bindsym $mod+grave exec "swaync-client --hide-latest"
    

    Then Super-backtick will hide the most recent notification to show up.


  • One thing that I’ve found that I like is having my waybar not normally visible. When I need to glance at the bar, I can just hold the Sway modifier key to make it temporarily show up.

    in ~/.config/sway/config:

    bar {
        swaybar_command waybar
        mode hide
    }
    

    More screen space for whatever I’m actually doing. If something is so critical that it needs to grab my attention (e.g. battery low on a laptop) then I have it set up to send a message to the notification manager.




  • Note that on current, systemd-based systems, one probably wants sudo journalctl -kb to show kernel messages for the current boot.

    dmesg reads from the in-memory kernel ring buffer. That can be desirable in some cases, but as the name suggests, the “ring buffer” is a “ring” — it has a finite amount of space and eventually, the old stuff gets overwritten by the new stuff. The idea is that a userspace logging daemon, like journald (or syslogd on most older systems) has time to pull the data from the ring buffer out to (potentially much larger) log files on disk.
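The overwrite behavior of a fixed-size ring can be illustrated with a toy model in Python; collections.deque with a maxlen behaves the same way, in that once it’s full, appending silently drops the oldest entry:

```python
from collections import deque

# Toy model of a kernel-style ring buffer: fixed capacity, oldest entries
# silently overwritten once it fills (capacity of 4 here, for illustration).
ring = deque(maxlen=4)

for i in range(10):
    ring.append(f"message {i}")

# Only the newest 4 messages survive; messages 0-5 have been overwritten,
# which is why a logging daemon needs to drain the buffer to disk in time.
surviving = list(ring)
```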

    journalctl will also post-process the output to do things like convert the time-from-boot to a wall clock time, which is generally more useful for correlating with other things.



  • If you don’t want it, you can hide it — most distros have some way to just show a splash screen that hides it. I always unhide it, as I can see hints of things going wrong.

    The messages are from a wide variety of kernel subsystems (and, later in the boot process, daemons) and most people aren’t going to be familiar with everything. I could tell you what a lot of lines mean, but there are always new ones showing up as new software is written.

    They’re more-likely to be useful if something breaks and then you go examine a specific, suspicious-looking message and learn what it means. You probably won’t be constantly trying to stay up on all kernel subsystems.