Monday, April 18, 2016

libinput and graphics tablet pad support

When we released graphics tablet support in libinput earlier this year, only tablet tools were supported. So while you could use the pen normally, the buttons, rings and strips on the physical tablet itself (the "pad") weren't detected by libinput and did not work. I have now merged the patches for pad support into libinput.

The reason for the delay was simple: we wanted to get it right [1]. Pads have a couple of properties that tools don't have, and we always considered pads to be different to pens, so we initially focused on a more generic interface (the "buttonset" interface) to accommodate them. After some coding, we have now arrived at a tablet pad-specific interface instead. This post is a high-level overview of the new tablet pad interface and how we intend it to be used.

The basic sign that a pad is present is the device having the tablet pad capability. Unlike tools, pads don't have proximity events; they are always considered to be in proximity, and it is up to the compositor to handle the focus accordingly. In most cases, this means tying it to the keyboard focus. Usually a pad is available as soon as the tablet is plugged in, but note that the Wacom ExpressKey Remote (EKR) is a separate, wireless device and may be connected after the tablet. It is up to the compositor to link the EKR with the correct tablet (if there is more than one).
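
Detecting a pad is thus the usual capability check when the device is added. A minimal sketch, assuming a libinput context is already set up and events are dispatched elsewhere:

    #include <stdio.h>
    #include <libinput.h>

    static void
    handle_device_added(struct libinput_event *event)
    {
        struct libinput_device *device = libinput_event_get_device(event);

        if (libinput_device_has_capability(device,
                                           LIBINPUT_DEVICE_CAP_TABLET_PAD)) {
            /* No proximity events for pads: treat the device as always
               in proximity and tie it to the keyboard focus. */
            printf("tablet pad: %s\n", libinput_device_get_name(device));
        }
    }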

Pads have three sources of events: buttons, rings and strips. Rings and strips are touch-sensitive surfaces and provide absolute values - rings in degrees, strips in normalized [0.0, 1.0] coordinates. Similar to pointer axis sources we provide a source notification. If that source is "finger", then we send a terminating out-of-range event so that the caller can trigger things like kinetic scrolling.
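
In terms of the API, ring handling looks roughly like the sketch below. The two helpers are hypothetical caller-side functions, not libinput API; everything else is the real interface:

    #include <libinput.h>

    /* hypothetical caller-side helpers */
    void scroll_to_angle(double degrees);
    void start_kinetic_scroll(void);

    static void
    handle_ring(struct libinput_event *event)
    {
        struct libinput_event_tablet_pad *pad =
            libinput_event_get_tablet_pad_event(event);
        double pos = libinput_event_tablet_pad_get_ring_position(pad);

        if (pos < 0.0) {
            /* terminating out-of-range event: the finger lifted off */
            if (libinput_event_tablet_pad_get_ring_source(pad) ==
                LIBINPUT_TABLET_PAD_RING_SOURCE_FINGER)
                start_kinetic_scroll();
            return;
        }

        scroll_to_angle(pos); /* position in degrees */
    }

Strips work the same way, with the equivalent strip functions and normalized positions instead of degrees.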

Buttons on a pad are ... different. libinput usually re-uses the Linux kernel's include/input.h event codes [2] for buttons and keys. But for the pad we decided to use plain sequential button numbering, starting at index 0. So rather than a semantic code like BTN_LEFT, you'd simply get a button 0 event. The reasoning behind this is a caveat in the kernel evdev API: event codes have semantic meaning (e.g. BTN_LEFT) but buttons on a tablet pad don't have those meanings. There are some generic event ranges (e.g. BTN_0 through to BTN_9) and the Wacom tablets use those, but once you have more than 10 buttons you leak into other ranges. The ranges are simply too narrow, so we end up with seemingly different buttons even though all buttons are effectively the same. libinput's pad support undoes that split and combines the buttons into a simple sequential range, leaving any semantic mapping of buttons to the caller. Together with libwacom, which describes the location of the buttons, a caller can get a relatively good idea of what the layout looks like.
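
On the API side this means the caller receives a plain button number and applies its own mapping. A sketch, where trigger_action_for_button() is a hypothetical caller-defined mapping, not libinput API:

    #include <libinput.h>

    /* hypothetical caller-defined mapping */
    void trigger_action_for_button(unsigned int button);

    static void
    handle_pad_button(struct libinput_event *event)
    {
        struct libinput_event_tablet_pad *pad =
            libinput_event_get_tablet_pad_event(event);
        unsigned int button = libinput_event_tablet_pad_get_button_number(pad);

        if (libinput_event_tablet_pad_get_button_state(pad) ==
            LIBINPUT_BUTTON_STATE_PRESSED)
            trigger_action_for_button(button);
    }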

Mode switching is a commonly expected feature on tablets. One button is designated as the mode switch button and toggles all other buttons between the available modes. On the Intuos Pro series tablets, that button is usually the one inside the ring. Button mapping, and thus mode switching, is however a feature we leave up to the caller: if you're working on a compositor, you will have to implement mode switching there, roughly as sketched below.
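
For illustration, a compositor-side sketch of that model. The mode switch button's number, the mode count and run_mapping() are all assumptions here; in a real compositor they would come from libwacom and the compositor's own configuration:

    #define MODE_SWITCH_BUTTON 0 /* e.g. the button inside the ring */
    #define NUM_MODES 4

    /* hypothetical per-mode dispatch */
    void run_mapping(unsigned int mode, unsigned int button);

    static unsigned int current_mode;

    static void
    on_pad_button_press(unsigned int button)
    {
        if (button == MODE_SWITCH_BUTTON) {
            current_mode = (current_mode + 1) % NUM_MODES;
            return;
        }

        /* all other buttons act according to the current mode */
        run_mapping(current_mode, button);
    }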

Other than that, pad support is relatively simple and straightforward and should not cause any big trouble.

[1] or at least less wrong than in the past
[2] They're actually linux/input-event-codes.h in recent kernels

Thursday, April 7, 2016

Why libinput doesn't have a lot of config options

Most days, at least one of the bugs I deal with requests something along the lines of "just add $FOO as a config option". In this post, I'll explain why this is usually a bad solution. First, read http://www.islinuxaboutchoice.com/ and keep those arguments in mind. Generally, there are two groups of configuration options - hardware options and user options. Hardware options are those that deal with specific quirks needed on some hardware, but not on other hardware. User options are those that deal with user preferences such as tapping or two-finger vs. edge scrolling.

In the old synaptics driver, we added options whenever something new came up and we tried to make those options generic. This was a big mistake. The driver now has over 70 configuration options; even if every option were a simple boolean, that's more than 2^70 possible combinations. In other words, it's completely untestable. To make a device work, users often have to find the right combination of options from somewhere, write out a root-owned config file and then hope it works. Why do we still think this is acceptable? Even worse: some options are very specific to hardware but still spread in user forum examples like an STD during spring break.

In libinput, we're having none of that. When hardware doesn't work we expect a user to file a bug; we then fix it upstream for the specific model and thus automatically fix it for all users of that device. We're leaning heavily on udev's hwdb, which we have extended to correct devices when the firmware announces wrong information. This has the advantage that there is only one authoritative source for the quirks a device needs. And we can update it as time goes by without having to worry about stale configuration options. One good example here is the custom acceleration profile that Lenovo X230 touchpads get in libinput. All in all, there has been little pushback against the lack of hardware-specific configuration options and most users are fine with it once they accept the initial waiting period to get a patch into their distribution.
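
For illustration, such a quirk is a one-line hwdb property on the device. The entry below is paraphrased from memory, so the match pattern may differ from the authoritative one in the libinput tree; note that hwdb property lines must start with a single space:

    # Lenovo X230 series touchpads get a custom accel profile
    libinput:name:SynPS/2 Synaptics TouchPad:dmi:*svnLENOVO*:pvrThinkPadX230*
     LIBINPUT_MODEL_LENOVO_X230=1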

User-specific options are more contentious. In our opinion, some features should be configurable and others should not; where to draw that line is of course quite undefined. For example, tapping on or off was one of the first configuration options available and was never a cause for arguments either way (except whether the default should be on or off). Other options attract more argument. Clickpad software buttons are always on the bottom edge and their size is hardcoded (synaptics allowed almost free positioning of those buttons). Other features, such as changing a two-finger tap to some other button event, are not supported at all in libinput.

This effectively comes down to cost. You see, whenever you write "it's just 5 lines of code to make this an option", what I think is "once the patch is reviewed and applied, I'll spend two days writing test cases and documentation. I'll need to handle any bug reports related to this, and I'm expected to keep this option working indefinitely. Any addition of another feature may conflict with this option, so I need to make sure the right combinations are possible and test cases are written." So your work ends after writing a 5-line patch; my work as maintainer merely starts. And unless it pays off long-term, the effort is not worth it. Some features make that cut; others don't if they're too much of a niche feature.

All this is of course nothing new and every software project needs to make these decisions. Input isn't even a special case here, it pales in comparison with e.g. the decisions UI designers need to make. However, in FOSS we have a tendency to think that because something is possible, it should be done. Legally, you have freedom to do almost anything with the software, so you can maintain a local fork of libinput with that extra feature applied. If that isn't acceptable, why would it be acceptable to merge the patch and expect others to shoulder the costs?

Wednesday, April 6, 2016

libinput now has a touchpad software middle button

I just pushed a patch to libinput master to enable a middle button on the clickpad software buttons. Until now, our stance was that clickpads only get a left and a right software button, split at the 50% mark. The reasoning is simple: touchpads only have markings for left and right buttons (if any!) and the middle button's extents are not easily discoverable without visual or haptic feedback. A middle button event could however be triggered through middle button emulation, i.e. by clicking the touchpad with one finger in the left and one in the right software button area (see libinput's middle button emulation documentation for instructions).

This is nice in theory but, as usual, reality gets in the way. Most interactions with the middle button are quick and short-lived, e.g. clicking the button once to paste. This is exactly the interaction many touchpads are spectacularly bad at. For middle button emulation to be handled correctly, both fingers must be registered before the physical button press. The scan rate of a touchpad is often too low for that, and on touchpads with extremely light resistance like the T440's it's common to register the physical click before we know that there's a second finger on the touchpad. But even on a T450 and an X220 with much higher clickpad resistance I barely managed to get above 7 out of 10 correctly registered middle button events. That is simply not good enough.

So the patch I just pushed to master enables a middle software button between the left and the right button. The exact width of the button scales with the touchpad but it's usually around 20-25mm, and it's centered on the touchpad, so despite the lack of visual or haptic guides it should be easy to hit reliably. The new behaviour is hard-coded, and for now middle button emulation continues to work on touchpads. In the future, I expect I will remove middle button emulation on touchpads or at least disable it by default.
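
For illustration only, the rough geometry that describes: the 25% scaling factor below is my assumption, only the 20-25mm clamp and the centering come from the description above, and the actual libinput code may well differ:

    /* illustrative sketch: a centered middle-button zone */
    static void
    middle_button_extents(double pad_width_mm, double *left_mm, double *right_mm)
    {
        double width = pad_width_mm * 0.25; /* assumed scaling factor */

        if (width < 20.0)
            width = 20.0;
        else if (width > 25.0)
            width = 25.0;

        *left_mm = (pad_width_mm - width) / 2.0;
        *right_mm = (pad_width_mm + width) / 2.0;
    }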

The middle button will be available in libinput 1.3.