Comments on Who-T: Thoughts on Linux multitouch
Blog author: Peter Hutterer (https://www.blogger.com/profile/17204066043271384535)

Unknown — 2010-11-16 11:07:
It's hard to imagine why MPX is not enough to support multitouch, or even MS Kinect, assuming MPX were implemented in three-dimensional coordinates (with Z as the strength of a touch).

Anonymous — 2010-10-07 23:45:
Hi,

Great post, thanks for summarizing the state of the art. It would be helpful if you could elaborate a bit on the Ubuntu gesture work. If not an X gesture extension, how do events get delivered to applications? I understand the different parts of the utouch stuff -- geis, grail -- but not how it all fits together for getting gestures from evdev through the X server to applications.

Unknown — 2010-10-07 15:59:
One other option you missed in the first example: the user may want to mix red and green, with the amount of each colour determined by how long each of the two touches lasted.

Peter Hutterer — 2010-10-06 07:54:
@Sergio:
There are no specs that I know of, at least not with regard to low-level MT support. For gestures there's quite a bit of literature out there.

@Justin:
Unfortunately, hardware that can identify fingers is quite far on the horizon. Identifying objects is relatively simple compared to identifying fingers.

@Michael:
No final decision yet, but it will likely be that a touch is never sent to a different window than the one underneath, unless there's a specific grab. Coincidentally, this seems to be the MS Surface behaviour too.

The main problem with a grab area around the window is that, especially given the fat-finger problem, it's really hard to make this work properly. As with many other things, it's something that needs to be addressed in the UI design so the problem is avoided in the first place.

@Jürgen:
IIRC, the main reason was the lack of ghost touches. Touch screens provide erroneous data if a user leans on them or even just rests a hand on them, and keeping the hand raised at all times causes fatigue. While this can be worked around in vertical or tabletop settings, it wasn't possible in this particular setup, which was essentially a workstation with a near-horizontal screen on the desk in front of the user.

Another point mentioned was the hover ability of the tablets, so some UI features can be based on hover.

Unknown — 2010-10-06 00:23:
Very interesting post, especially for an HCI researcher like me who also works with multitouch devices. While reading your post, the ATC training part especially caught my attention. Are there any further reasons known to you why they switched from multitouch devices to tablets?
Thanks!

Anonymous — 2010-10-05 19:40:
Did you end up finding a solution to the problem of working out how to map multi-touch events to windows when those events originated outside of the windows? I lost track of the discussion at some point, but couldn't help wondering whether it could be solved by a multi-touch-aware window manager. Said window manager could decide on an area around the window which "belonged" to that window and visually highlight it however seemed appropriate. That would have eliminated a lot of ambiguity (for the user too) and got rid of any need for grabbing touch points.

Sorry if that sounds like a design discussion, but since you have probably already taken your own decisions there, you can just treat it as musings.

Unknown — 2010-10-05 19:32:
Quote: "Possible scenarios that cause the above datastream are:

* The user has erroneously selected red, then corrected to green, and is now painting with two fingers in green.
* The user has selected red with one finger, green with another finger, and now wants to paint with two different colours.
* The user has erroneously selected red with one finger, corrected to green, and now wants to paint in green and in the colour the other finger already had assigned.
* Two users selected two different colours and now paint simultaneously."

How about adding the expectation that the hardware implement a mechanism to identify each touch with an id number? The hardware would then be able to report that the first touch has id "abc" and the second touch has id "def".

Now "abc" is associated with red and "def" is associated with green, and the two-touch gesture will report the id and position of each touch to ensure the proper colour is drawn.

For simple multi-touch hardware that doesn't implement this mechanism, assume that all touch events have the same id.

Since all touches then have the same id, the user(s) selected red, then green, and are now drawing two green lines.

I can understand if this expectation seems a bit absurd; however, some multi-touch surfaces demoed on YouTube use cameras to identify physical objects (size, colour, number of dots on a d6 die, etc.). Who knows, maybe future touch surfaces will be able to see the prints on your fingers as you touch and convert them into a reliable id number.

Sérgio Basto — 2010-10-03 11:35:
Hi, landing here :). My suggestion: maybe begin with dual touch, as for example on my touchpad:

    ALPS_PASS | ALPS_DUALPOINT }, /* Dell Latitude E6410 */

Second, don't we have some standards or specifications from others? I mean, if we may copy and adopt some rules from others, such an implementation could be easier.
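The per-touch id scheme proposed in the comment above can be sketched in a few lines. This is a hypothetical illustration only: `PaintSurface`, `select_colour`, and `stroke_colour` are invented names for this sketch, not part of any real evdev or X API.

```python
# Hypothetical sketch of the per-touch id idea: each touch id carries its
# own colour selection. All names here are invented for illustration.

class PaintSurface:
    DEFAULT_COLOUR = "black"

    def __init__(self):
        # Maps a hardware-reported touch id to its selected colour.
        self.colour_by_touch = {}

    def select_colour(self, touch_id, colour):
        # A touch landing on a palette swatch rebinds that touch's colour.
        self.colour_by_touch[touch_id] = colour

    def stroke_colour(self, touch_id):
        # Look up the colour for this touch; unknown ids get a default.
        return self.colour_by_touch.get(touch_id, self.DEFAULT_COLOUR)


# Hardware with per-touch ids: "abc" paints red, "def" paints green.
surface = PaintSurface()
surface.select_colour("abc", "red")
surface.select_colour("def", "green")
print(surface.stroke_colour("abc"), surface.stroke_colour("def"))  # red green

# Hardware without per-touch ids reports one shared id, so the later
# selection wins and both strokes come out green.
simple = PaintSurface()
simple.select_colour("shared", "red")
simple.select_colour("shared", "green")
print(simple.stroke_colour("shared"))  # green
```

The fallback case shows why the same data model covers both hardware classes: collapsing every touch onto one shared id reproduces exactly the "selected red, then green, now drawing two green lines" interpretation from the comment.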