news aggregator

Svetlana Belkin: Ohio Team Wiki Update/Clean Up

Planet Ubuntu - Tue, 2014-08-19 19:23

As my most recent Ubuntu-related project, and a very overdue task on my To Do list, I worked on updating the Ohio Team Wiki pages (I took some ideas from how the Doc Team Wiki pages look):

  • I removed the unneeded pages after getting approval from Stephen Michael Kellat
  • I consolidated similar pages, such as meeting agendas and the various events, into main pages (/Meetings and /Events) and linked those two main pages from the home page
  • I only have two things left to do: fix the banner and find all of the missing records from the mailing list

Hopefully the wiki is now cleaner and it is easier to find the information one needs.


Bodhi.Zazen: Music practice

Planet Ubuntu - Tue, 2014-08-19 18:26

With the advent of YouTube, there is a plethora of music “lessons” available on the internet. When learning new riffs, however, it is helpful to be able to alter the speed of playback and play selective sections of the lesson.

For some time I have been using Audacity, which has the advantage of cross-platform availability. However, Audacity is a bit of overkill, and I find it a bit slow at times.

In addition, when selecting a particular segment within the lesson, skipping dialog or parts already mastered, Audacity is a bit “clunky” and somewhat time-consuming. Alternatively, one can splice the lessons with ffmpeg, which is again somewhat time-consuming.

Recently I came across a simple, no-frills, lightweight solution: “Play it slowly”.

Home page

Download (github)

Play it slowly is a lightweight application with a simple, clean interface. It is easy to use and offers basic features such as:

  1. Slow the playback speed without altering pitch.
  2. Easily mark, move, and reset sections of a track for playback.
  3. Easily start/stop/restart playback.

Play it slowly is in the Debian and Ubuntu repositories:

sudo apt-get install playitslowly

For Fedora, first install the dependencies:

yum install gstreamer-python gstreamer-plugins-bad-extras

Download the source code from the above link (version 1.4.0 at the time of this writing).

Extract the tarball and install:

tar xvzf playitslowly-1.4.0.tar.gz
cd playitslowly-1.4.0
sudo python setup.py install

For additional options see the README or run:

python setup.py --help

Ubuntu Kernel Team: Kernel Team Meeting Minutes – August 19, 2014

Planet Ubuntu - Tue, 2014-08-19 17:17
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140819 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the first v3.16.1 upstream stable
kernel and uploaded to the archive, i.e. linux-3.16.0-9.14. Please test
and let us know your results.
—–
Important upcoming dates:
Thurs Aug 21 – Utopic Feature Freeze (~2 days away)
Mon Sep 22 – Utopic Final Beta Freeze (~5 weeks away)
Thurs Sep 25 – Utopic Final Beta (~5 weeks away)
Thurs Oct 9 – Utopic Kernel Freeze (~7 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~8 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~9 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Aug. 19):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

The Fridge: Interview with Svetlana Belkin

Planet Ubuntu - Tue, 2014-08-19 07:46

Elizabeth K. Joseph: Can you tell us a little about yourself?

Svetlana Belkin: I am Svetlana Belkin, an active Ubuntu Member since July 2013, and I gained my Membership on February 6, 2014. This month will mark my first year of working in the Ubuntu Community.

I am not a developer; I cannot code to save my life!

I am a biology major with a focus on Cellular and Molecular Biology who uses Ubuntu because it and the FOSS world match how I think.

EKJ: What inspired you to get involved with the Ubuntu Community?

SB: An idea for a multi-player online game that is based on Mario Party but instead of mini-games, players use cards that are either attack, defense, or traps to get coins. The one with the most coins wins but everyone can keep the coins that they gained to shop for more cards and avatar items.

This was about one year ago, and I wanted to find someone who could help develop it. Since I am a woman, I joined Ubuntu Women to seek one out. But I quickly found out that it was a bad choice and I started to work on improving the Ubuntu Women Wiki to have it up-to-date. That’s what led me into doing other things within the Ubuntu Community.

EKJ: What are your roles within the Ubuntu community and what plans do you have for the future?

SB: My main role within the Ubuntu Community is to help newcomers to find their place in the Community and to network with women (Ubuntu Women) and scientists (Ubuntu Scientists) alike to improve the FOSS world.

I also help the Ubuntu Documentation team to keep the Ubuntu Community Help Wiki up-to-date.

My future plans are to train new leaders within the Community so they know how to lead.

EKJ: Have you hit any barriers with getting involved and what can you recommend to newcomers?

SB: Newcomers need to remember that they do not need to be a developer to get involved – that’s the barrier that I hit.

I would recommend to newcomers that they should not think that they need to be developers, and they should take these steps: they should start out small, join the team/project and its mailing-list, make sure to read all of the documentation for that project/team, and introduce themselves to the team via the mailing-lists. The best route – if they do not know what skills they have or what teams/projects to join – is to go to their Local Community and ask on the mailing list or their IRC channel.

EKJ: Is there anything you feel the Ubuntu project could improve on when it comes to new folks coming to the project?

SB: The main thing is the lack of Ubuntu Recruitment/Promo/Comms teams where the new folks can join and ask what teams/projects they can put their skills into. The other flavors have these teams but Ubuntu does not.

EKJ: What other things are you interested in outside of open source and Ubuntu?

SB: I make art from time to time, and play my favorite and only Multi-User Dungeon, Armageddon MUD.

Originally posted by Elizabeth K. Joseph in Full Circle Magazine Issue #87 on July 25, 2014

The Fridge: Ubuntu Weekly Newsletter Issue 379

Planet Ubuntu - Tue, 2014-08-19 04:22

Stephen Michael Kellat: Restart

Planet Ubuntu - Tue, 2014-08-19 00:00

Eventually even a blog can be brought back to life. A simple list may be best for now, at the very least:

  • I now have my General-class amateur radio license in the United States
    • I have to find a good way to make use of it
    • The hamexam package in the archive was a great study aid
    • The use of fldigi on high frequency shortwave bands is now possibly in order
  • I am blessed to have a great team of deputies to assist in leading Ubuntu Ohio
  • Podcasting is still offline for the time being due to work-related circumstances
  • The work of the LoCo Council has gotten interesting, though due to OpenID issues I cannot write blog posts on that site to talk about things
  • I have been increasingly assisting in backporting software relating to the pump.io network such as dianara and pumpa

Daniel Pocock: Is WebRTC private?

Planet Ubuntu - Mon, 2014-08-18 19:55

With the exciting developments at rtc.debian.org, many people are starting to look more closely at browser-based real-time communications.

Some have dared to ask: does it solve the privacy problems of existing solutions?

Privacy is a relative term

Perfect privacy and its technical manifestations are hard to define. I had a go at it in a blog post on the Gold Standard for free communications technology on 5 June 2013. By pure coincidence, a few hours later, the first Snowden leaks appeared and this particular human right was suddenly thrust into the spotlight.

WebRTC and ICE privacy risk

WebRTC does not give you perfect privacy.

At least one astute observer at my session at Paris mini-DebConf 2014 questioned the privacy of Interactive Connectivity Establishment (ICE, RFC 5245).

In its most basic form, ICE scans all the local IP addresses on your machine and NAT gateway and sends them to the person calling you so that their phone can find the optimal path to contact you. This clearly has privacy implications as a caller can work out which ISP you are connected to and some rough details of your network topology at any given moment in time.

What WebRTC does bring to the table

Some of this can be mitigated though: an ICE implementation can be tuned so that it only advertises the IP address of a dedicated relay host. If you can afford a little latency, your privacy is safe again. This privacy-protecting measure could be taken by a browser vendor such as Mozilla, or it can be implemented in JavaScript by a softphone such as JSCommunicator.
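As a sketch of what this relay-only tuning can look like in a browser application, the standard RTCPeerConnection configuration lets JavaScript restrict ICE to relay candidates. The TURN server URL and credentials below are placeholders, not real infrastructure:

```javascript
// Sketch: restrict ICE to relay candidates so that only the TURN
// relay's address is advertised to the remote peer. The server URL,
// username and credential here are placeholder assumptions.
var rtcConfig = {
  iceServers: [
    {
      urls: "turn:turn.example.org:3478",
      username: "alice",
      credential: "secret"
    }
  ],
  // "relay" hides local and server-reflexive addresses from the peer,
  // at the cost of a little extra latency through the relay host.
  iceTransportPolicy: "relay"
};

// In a browser, this configuration would be passed to the connection:
// var pc = new RTCPeerConnection(rtcConfig);
```

With this policy the caller only ever learns the relay's address, not your ISP or network topology.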

Many individuals are now using a proprietary softphone to talk to family and friends around the world. The softphone in question has properties like a virus, siphoning away your private information. This proprietary softphone is also an insidious threat to open source and free operating systems on the desktop. WebRTC is a positive step back from the brink. It gives people a choice.

WebRTC is a particularly relevant choice for business. Can you imagine going to a business and asking them to make all their email communication through hotmail? When a business starts using a particular proprietary softphone, how is it any different? WebRTC offers a solution that is actually easier for the user and can be secured back to the business network using TLS.

WebRTC is based on open standards, particularly HTML5. Leading implementations, such as the SIP over WebSocket support in reSIProcate, JSCommunicator and the DruCall module for Drupal are fully open source. Not only is it great to be free, it is possible to extend and customize any of these components.

What is missing

There are some things that are not quite there yet and require a serious effort from the browser vendors. At the top of the list for privacy:

  • ZRTP support - browsers currently support DTLS-SRTP, which is based on X.509. ZRTP is more like PGP, a democratic and distributed peer-to-peer privacy solution without needing to trust some central certificate authority.
  • TLS with PGP - the TLS protocol used to secure the WebSocket signalling channel is also based on X.509 with the risk of a central certificate authority. There is increasing chatter about the need for TLS to use PGP instead of X.509 and WebRTC would be a big winner if this were to eventuate and be combined with ZRTP.

You may think "I'll believe it when I see it". Each of these features, including WebRTC itself, is a piece of the puzzle and even solving one piece at a time brings people further out of danger from the proprietary mess the world lives with today.

Jono Bacon: New Facebook Page

Planet Ubuntu - Mon, 2014-08-18 16:35

As many of you will know, I am really passionate about growing strong and inspirational communities. I want all communities to benefit from well organized, structured, and empowering community leadership. This is why I wrote The Art of Community and Dealing With Disrespect, and founded the Community Leadership Summit and Community Leadership Forum to further the art and science of community leadership.

In my work I am sharing lots of content, blog posts, videos, and other guidance via my new Facebook page. I would be really grateful if you could hop over and Like it to help build some momentum.

Many thanks!

Stuart Langridge: Do you do it anyway?

Planet Ubuntu - Mon, 2014-08-18 13:17

Let us imagine that you are a designer, designing a thing. Doesn’t matter here what the thing is; it might be a piece of software or a phone or a car or a coffee machine or a tea cup. Further imagine that there are already lots of people using this thing, and a whole bunch of those people legitimately want it to do something that it currently does not. You’d like the thing to have this feature, but importantly you can’t work out how to add it such that it’s beautifully integrated and doesn’t compromise any of the other stuff. (That might be because there genuinely is no way, or just because you haven’t thought of one yet, and obviously the second of those looks like the first one to you.)

The fundamental question dividing everyone into two camps is: do you do it anyway?

If you add the feature, then you’ll either do so relatively visibly or relatively invisibly. If it’s relatively visible then it will compromise the overall feel of the thing and maybe make it more difficult to use its existing features (because you can’t think of a seamless brilliant way to add it, so it will be unseamless and unbrilliant and maybe get in the way). If you add it relatively invisibly, then most people will not even discover that it exists and only a subset of those will actually learn how to use it at all.

However, if you don’t add it, then lots of people who could benefit from it, want it, and could have it aren’t allowed, even if they’re prepared to learn a complex way to make it happen.

These two camps, these two approaches, are contradictory, in opposition, irreconcilable, and equally valid.

It’s the “equally valid” bit that people have trouble with.

This war is played out, day after day, hour after hour, in every field of endeavour. And not once have I ever seen anyone become convinced by an argument to switch to the other side. I have seen people switch, and lots of them, but it’s a gradual process; nobody reads a frantic and shrill denunciation of their current opinion and then immediately crosses the floor.

There are also many people who would protest this division into two opposing camps; who would say that one should strike a balance. Everyone who says this is lying. It is not possible to strike a balance between these two things. You may well believe yourself to straddle this divide like an Adonis, but what that actually means is that sometimes you’re in one camp and sometimes you’re in the other, not that you’re simultaneously in both. Saying “well, you should add the feature, but as sensitively as possible” means “if it can’t be done sensitively, I’ll do it anyway”. Saying “it’s important to be user-focused, not feature-focused” means “people don’t get to have this thing even if they want it”. Doubtless you, gentle reader, would disagree with one of those two characterisations, which proves the point. Both views are equally valid, and they’re in opposition. If you’re of the opinion that it should be possible to straddle this divide, then help those of your comrades who can’t yet do so; help the do-it-anyways to understand that it’s sometimes better to leave a thing out, and help the deny-it-anyways to understand that some of their users are happy to learn about a thing to get best use from it. But if you’re already in one camp, stop telling the other camp that they’re wrong. We have enough heat and not enough light already.

John Baer: Considering Acer nVidia K1 Chromebook

Planet Ubuntu - Sun, 2014-08-17 20:17

Talked about for nearly a year, the nVidia Tegra K1 Chromebook arrives as the Acer Chromebook 13 (CB5-311).

How will this Chromebook stack up against the competition? The newly announced nVidia Shield Tablet gives us a glimpse of what the K1 is capable of.

Despite the largely similar clock speeds compared to the Snapdragon 800 we see that the Tegra K1 is generally a step above in performance. Outside of Apple’s A7 SoC and x86 SoCs, NVIDIA is generally solidly ahead of the competition.

When it comes to GPU performance, there’s really no question: the Tegra K1 is easily the fastest in all of our GPU benchmarks. It handily beats every other ARM SoC, including the newest generation of SoCs such as the recently introduced Snapdragon 805 and its Adreno 420 GPU. It’s worth noting that the Snapdragon 805 is likely aimed more at smartphones than tablets, although we are looking at its performance in Qualcomm’s tablet development platform here. Until we get a look at Snapdragon 805 power consumption we can’t really draw any perf/watt conclusions here. Ultimately, the only thing that can top the Shield Tablet is the Surface Pro line, which uses more powerful laptop-class hardware.

Source: AnandTech

For $279 you get the following.

  • NVIDIA Tegra K1 Quad Core 2.1 GHz Processor
  • 2 GB DDR3L SDRAM
  • 16 GB Internal Storage
  • 13.3-Inch Screen, NVIDIA Kepler GPU with 192 NVIDIA CUDA cores
  • 1366 x 768 pixels Screen Resolution
  • Chrome OS
  • 802.11ac WiFi
  • 13-hour battery life

For $379 you get all of the above and the following.

  • 4 GB DDR3L SDRAM
  • 32 GB Internal Storage
  • 1920 x 1080 pixels Screen Resolution
  • 11-hour battery life

As good as the above looks, I do not expect the early reviews to be kind.

CPU Performance

The first complaint will be the anticipated K1 Google Octane score of 7628 (some are suggesting it may achieve 8000). The new Acer C720 (Core i3-4005U, 4GB RAM) boasts an Octane score of 14530. The Acer C720 (Celeron 2955U, 2GB RAM) boasts an Octane score of 11502. Admittedly, this added power comes at the expense of battery life, but the rated 7.5 hours from these Intel-powered Chromebooks is adequate, and many feel an Octane score of 11000 is the minimum required to support a quality Chrome OS user experience.

TN Screen Panel

Manufacturers are reluctant to offer IPS screen panel options. The reasons are cost and the education market: education is a big driver in today’s Chromebook market, and TN-quality displays are considered adequate.

Build Quality

For a laptop priced at less than $400, unibody polycarbonate plastic is acceptable as long as it doesn’t flex or bend. Historically, Chromebooks from Acer have been criticized for their build quality.

Saving Grace

GPU Performance

As noted in the Shield Tablet review the Kepler GPU is best in class. Assuming the keyboard and track pad perform well and the TN panel does a reasonably good job in displaying content, the GPU could move this Chromebook to the head of the class.

Gaming

You don’t really think of a Chromebook as a gaming console but if you are targeting a teen to young adult audience, gaming may be the item which closes the deal.

We game on all our devices, so why should a Chromebook be any different? With the upcoming launch of addicting games like Miss Take and Oort, along with the promise of great titles thanks to the adoption of WebGL in Unreal Engine 4 and Unity 5, the future of Chromebooks is looking really fun.

Source: TegraZone

Are you considering the Acer nVidia K1 Chromebook? The standard and HD models are up for pre-order on Amazon.

The post Considering Acer nVidia K1 Chromebook appeared first on j-Baer.

Andrew Pollock: [tech] Solar follow up

Planet Ubuntu - Sun, 2014-08-17 17:32

Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.

Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter had a value 26 kWh lower than when I started.

I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.

Now that I have the new digital meter things are less exciting. It has a meter measuring how much power I'm buying from the grid, and how much excess power I'm exporting back to the grid. So far, I've bought 32 kWh and exported 53 kWh excess energy. Ideally I want to minimise the excess because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to try and shift around my consumption as much as possible to the daylight hours so that I'm using it rather than exporting it.

On a good day, it seems I'm generating about 10 kWh of energy.

I'm still impatiently waiting for PowerOne to release their WiFi data logger card. Then I'm hoping I can set up something automated to submit my daily production to PVOutput for added geekery.

Mathieu Trudel: Focusing and manual skills

Planet Ubuntu - Sun, 2014-08-17 01:09
I don't think anyone can argue against the fact that to be an effective developer, you need to somehow attain sufficient focus to fully take in the task at hand, and be sufficiently in the zone that answers to the problem at hand come naturally. In a way, programming is like that, a transcendent form of art.

At least, it is, to some degree, for me. And I do feel that given sufficient focus, calm and quiet (or perhaps background noise, depending on the mood I'm in), I can get "in the zone", and solutions to what I'm trying to do come somewhat naturally. Not to say that I'm necessarily writing good code, but at least it forms some sort of sense in my mind.

People have different ways to achieve focus. Some meditate; for some it comes more easily than for others. For some people, it works well to execute some kind of ritual to get in the right frame of mind: those can be as insignificant as getting out of bed in a certain way (for those fortunate enough to work from home), or as complicated as necessary. I believe many, if not most, integrate it into their routine, to the point that they perhaps forget what it is that they do to attain focus.

For me, it now happens to be shaving, and the associated processes. It used to be kind of a chore, until I picked up wet shaving, and in particular, straight razor shaving.

There's nothing quite like putting a naked, extremely sharp blade against your skin to get you to only think about one thing at a time :)

I won't lie, the first shave with that relic was a scary experience. I wasn't at all sure of myself, with only a few tips and some videos on YouTube as training. I had bought a straight razor from Le Centre du Rasoir near my house after stumbling on articles about barbershops on the web, and it somehow interested me.

Since then, I've slowly taken up the different tasks that go with the actual act of shaving with a straight razor: honing the blade, stropping, shaving, etc., and picking up the different tools required (blade, strop, honing stones, shaving creams or soaps, etc.). It was as I was slowly honing and restoring four straight razors, which came to me from eBay and as a gift from my father, that I thought of writing this post, during a short break from the honing. Getting back home and putting the finishing touches on the four razors got me thinking, and I noticed I had again become much more relaxed just by taking the time to do one thing well, taking care in what I was doing.



I think every developer.... well, everyone can benefit from acquiring some kind of ritual like this, using our hands rather than our brains to achieve something. It's at least a great experience to get a little bit away from technology for a short while, revisiting old skills of earlier times.

As for the wet shaving itself, I'd be happy to respond to comments here, or blog again about it if there's enough interest in the subject; I'd love to hear that I'm not the only one in the Ubuntu and Debian communities crazy enough to take a blade to my face.

Costales: Destination Ubuconla 2014 - #5 Ubuconla, Day 2

Planet Ubuntu - Sat, 2014-08-16 18:55
Ugh... what a headache... The shower and the juice at breakfast did wonders to clear my head.

Fernando, Marta and I went back to the university where Ubuconla is being held.
This time I opened with a talk on creating webapps. The practice is starting to show, as I was much calmer, although Naudy left me blank by interrupting me in the middle of the talk to hand me an Ubuntu DVD (they are raffled off at the end) (?!).
[Photo captions: Requirements for a webapp · Starting the workshop · The manifest of a webapp · How a website asks to install the webapp · How to read the properties of a website]
Meanwhile, Fernando Lanero was giving his talk to the children, to prepare the play.
[Photo captions: Fernando Lanero starts his talk to the students · How attentive!]
For the rest of the morning I barely had time to attend the other talks, because several attendees asked me about questions or problems they had with their laptops (always a good opportunity to make new friends, like Elías), and the two hours until lunchtime flew by.
[Photo captions: Resolving a specific question · And a problem that took me longer than expected]
[Photo captions: Fernando Amen in his talk · About I-Linux]
[Photo captions: Another morning talk · The students performing the first play · And starting the second · Fernando and Marta with the students' teacher · Francisco Javier Pérez · The audience packed every talk, from the first to the last]
Talks, talks and more talks
Fernando and Marta left for the hotel because they had come down with diarrhoea.

I really enjoyed lunch. The university treated us, in a room with all the organizers and speakers, and it was great to share a quiet moment with the rest of the colleagues, especially Fernando García Amen, Sergio Meneses, Dante and Naudy.
[Photo captions: A very pleasant lunch · The invitation from the Universidad Tecnológica Simón Bolivar was wonderful]
After lunch, I managed to solve a problem for a guy who needed a server on his laptop, and I attended a very interesting talk on Ubuntu on embedded systems.
[Photo captions: Ubuntu on embedded systems · Hardware also had its moment]
[Photo captions: The Cubieboard was the big winner · The talk was great · Even with a small demo]
I was very impressed by the power of SAGEMATH in the workshop given by Emmanuel.
[Photo captions: Emmanuel with his SAGEMATH workshop · One of the best talks]
[Photo caption: The passion Emmanuel conveys makes you passionate about mathematics]
Lu also started to have a fever and diarrhoea, so we went back to the hotel early so she could rest. She took an ibuprofen and fell asleep.

We arranged to meet at 19:00 at the Torre del Reloj to go to dinner with whichever speakers turned up. I didn't want to leave her alone, but she convinced me by saying that she was really going to sleep.

So at 18:50 I was already waiting, right on time, at the Torre del Reloj. The first to arrive was Fernando García Amen, then Victoria, and finally Sergio at around 19:15 (cough cough, ahem! ;)). I teased him, asking whether we should wait a bit longer, because the day before they had already left by 19:03 :P And in no time there were eight of us.

We strolled for a while until we stopped at an Italian restaurant for dinner. In my opinion, when travelling you should try to eat the local food, but well, what matters in this case is the company. Over dinner we chatted and laughed a lot, especially about what one of the diners chose for dinner, which I can't even make public hahaha > ; )
[Photo caption: An unforgettable dinner]
After dinner, a pleasant walk to an ice cream shop where they give you a delicious two-scoop ice cream that must weigh 400 g :S That, on top of the spaghetti with tomato... So much for the diet! :P

And then I went back to Lu, leaving the rest of the team to enjoy the Cartagena night :)
And tomorrow, the last day of Ubuconla

Keep reading more about this trip.

Daniel Pocock: WebRTC: what works, what doesn't

Planet Ubuntu - Sat, 2014-08-16 15:49

With the release of the latest rtc.debian.org portal update, there are numerous improvements but there are still some known problems too.

The good news is that if you have a web browser, you can probably make successful WebRTC calls from one developer to another without any need to install or configure anything else.

The bad news is that not every permutation of browser and client will work. Here I list some of the limitations so people won't waste time on them.

The SIP proxy supports any SIP client

Just about any SIP client can connect to the proxy server and register. This does not mean that every client will be able to call each other. Generally speaking, modern WebRTC clients will be able to call each other. Standalone softphones or deskphones will call each other. Calling from a normal softphone or deskphone to a WebRTC browser, or vice-versa, will not work though.

Some softphones, like Jitsi, have implemented most of the protocols to communicate with WebRTC but they are yet to put the finishing touches on it.

Chat should just work for any combination of clients

The new WebRTC frontend supports SIP chat messaging.

There is no presence or buddy list support yet.

You can even use a tool like sipsak to accept or send SIP chats from a script.

Chat works for any client new or old. Although a WebRTC user can't call a softphone user, for example, they can send chats to each other.

WebRTC support in Iceweasel 24 on wheezy systems is very limited

On a wheezy system, the most recent Iceweasel update is version 24.7.

This version supports most of WebRTC but does not support TURN relay servers to help you out of a NAT network.

If you call between two wheezy machines on the same NAT network it will work. If the call has to traverse a NAT boundary it will not work.

Wheezy users need to either download a newer Firefox version or use Chromium.

JsSIP doesn't handle ICE elegantly

Interactive Connectivity Establishment (ICE, RFC 5245) is meant to prevent calls from being answered with missing audio or video streams.

ICE is a mandatory part of WebRTC.

When correctly implemented, the JavaScript application will exchange ICE candidates and run the connectivity checks before alerting anybody that a call is ringing. If the checks fail (for example, with Iceweasel 24 and NAT), the caller should be told the call can't be made and the callee shouldn't be disturbed at all.

JsSIP is not operating in this manner though. It alerts the callee before telling the browser to start the connectivity checks. Then it even waits for the callee to answer. Only then does it tell the browser to start checking connectivity. This is not a fault with the ICE standard or the browser, it is an implementation problem.

Therefore, until this is fully fixed, people may still see some calls that appear to answer but don't have any media stream. After this is fixed, such calls really will be a thing of the past.
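To illustrate the ordering the standard intends, here is a minimal sketch. It is not JsSIP code; the callbacks `alertCallee` and `rejectCall` are hypothetical placeholders, and the sketch simply waits for the browser's ICE connectivity checks to succeed before ringing anyone:

```javascript
// Sketch of the intended ordering: run ICE connectivity checks first,
// and only ring the callee once they succeed. The callbacks passed in
// (alertCallee, rejectCall) are hypothetical placeholders.
function ringOnlyWhenConnected(pc, alertCallee, rejectCall) {
  pc.oniceconnectionstatechange = function () {
    if (pc.iceConnectionState === "connected" ||
        pc.iceConnectionState === "completed") {
      alertCallee();   // checks passed: safe to ring the callee
    } else if (pc.iceConnectionState === "failed") {
      rejectCall();    // checks failed (e.g. NAT without TURN):
                       // tell the caller, don't disturb the callee
    }
  };
}
```

With this ordering, a call that can never carry media is rejected before the callee's phone ever rings.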

Debian RTC testing is more than just a pipe dream

Although these glitches are not ideal for end users, there is a clear roadmap to resolve them.

There is also a growing collection of workarounds to minimize the inconvenience. For example, JSCommunicator has a hack to detect when somebody is using Iceweasel 24 and just refuse to make the call. See the option require_relay_candidate in the config.js settings file. This also ensures that it will refuse to make a call if the TURN server is offline. Better to give the user a clear error than a call without any audio or video stream.

require_relay_candidate is enabled on freephonebox.net because it makes life easier for end users. It is not enabled on rtc.debian.org because some DDs may be willing to tolerate this issue when testing on a local LAN.

Ubuntu Podcast from the UK LoCo: S07E20 – The One with All the Cheesecakes

Planet Ubuntu - Sat, 2014-08-16 09:33

We’re back with Season Seven, Episode Twenty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating the Co-op’s Red Velvet cake in Studio L.


In this week’s show:

  • We discuss whether Google are eating our lunch. Not literally. At least, I hope not…

  • We also discuss:

  • We share some Command Line Lurve that tells you the day of the week (modified from @climagic):

        $ date -d "Nov 15" | cut -d' ' -f1
        Sat
        $ date -d "Nov 15 2015" | cut -d' ' -f1
        Sun
        $ date -d "Nov 15 2016" | cut -d' ' -f1
        Tue
  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Ronnie Tucker: Ubuntu Shopping Lens (Scopes) Declared Legal in the UK and Most Likely in the European Union

Planet Ubuntu - Sat, 2014-08-16 09:09

The UK authorities have declared that the Ubuntu Shopping Lens are legal and that no laws have been broken, either in Great Britain or in the European Union.

Some of you might remember that Canonical took a lot of flak from the community when the developers decided to integrate the Shopping Lens into the Ubuntu operating system. Two years have passed since then and a lot of things have changed in the meantime.

For one, the Lens are now called Scopes, but that’s beside the point. When the Ubuntu Shopping Lens were first introduced, users didn’t have any control over them, at least not in a clear and easy way. There was no warning that data was sent over the network and there was no button to turn it off.

Currently, very few people even mention the Shopping Lens, and that is a clear sign that users have gotten used to them and that they have learned to use them or shut the functionality off entirely.

Source:

http://news.softpedia.com/news/Ubuntu-Shopping-Lens-Scopes-Declared-Legal-in-UK-and-Most-Likely-in-European-Union-453843.shtml

Submitted by: Silviu Stahie

Costales: Destination Ubuconla 2014 - #4 Ubuconla, Day 1

Planet Ubuntu - Fri, 2014-08-15 23:08
At 8:30, Fernando, Marta and I headed to the Universidad Tecnológica de Bolivar for the first day of Ubuconla.

Starting in 3, 2, 1...
The number of attendees at free software events in Latin America will always amaze me, and first thing in the morning, wow, the place was already packed, completely full. Impressive!

Hundreds of attendees
With their corresponding registration
The day began with the opening of the event by Jairo Serrano, Dean of the Universidad Tecnológica de Bolivar, Bart and Sergio.

Jairo Serrano
Bart
Sergio Meneses
Fernando continued with the migration of his school; I interviewed him some time ago on this very topic. The talk was very enjoyable and ended with a small raffle for whoever had paid the most attention during the conference.

Fernando Lanero
With a huge audience, explaining his school's migration to Ubuntu
What a slide! :))
After his talk there were 30 places to learn Ubuntu Touch and plenty of interested people. There was also a simultaneous session on how to localize Ubuntu, by Dante Díaz Figueroa.
I chose Dante's, given my past as a localizer into Asturian, and I took the chance to clear up a personal doubt, which Dante resolved very professionally.

Dante explained how to localize Ubuntu
Everyone paid close attention
After Dante, Naudy showed us the fine work of putting a laptop with free software into the hands of Venezuelan students through the Canaima distro.
Naudy presented Canaima
Also showing the netbook given to students in Venezuela
The youngest attendees were also part of this Ubuconla :D
And the first big school play, a new project for these events, which consists of gathering school students and giving them a brief overview of what free software and the Ubuntu spirit are all about.
A pioneering experience
After the explanation, they have to put together a play to perform for all the event's attendees, which, of course, has to be about Ubuntu.
First play
Another of the plays
"Combining free software with teaching", as Cesar Vázquez, the students' teacher, aptly put it; Fernando gave them the introduction.

All the protagonists, each giving a different take on the Ubuntu spirit
After the students' innovative play, we all went out to eat together at a nearby traditional restaurant. The food was great, but the conversation was even better :D
A summary of the first day? > : )
A little before 2 I went with Lu to set up the computer for my talk on security.
When I arrived, a talk that had started at 13:00 was under way and I had missed it :( Having several simultaneous talks is a good thing, but it's a shame when you're interested in all of them and can only attend half :P
Snif, the talk I missed
My first talk went very well, despite my initial nerves. There were also plenty of questions :)) On my way out, Nel and Víctor made me blush by asking for an autograph and taking pictures with me :$
Starting my first talk
I was more nervous than I looked
And the audience was very interested in it
On security I couldn't leave out Gufw
The rest of the afternoon I enjoyed Rodny Silgado's talk on Inkscape
Learning about Inkscape
and Jiliar Silgado's introduction to Python:
Learning Python
Attendees and more attendees :D
After Ubuconla we went back to the hotels, to recover with a cold shower before all meeting at 7 in the city centre.
We stopped by for Fernando and Marta, who had fallen fast asleep and took a while to come down. When we got to the meeting point about 10 minutes late we only found Dante, and as it happened he wasn't there for the dinner but out for a stroll.
We waited about half an hour and, seeing that nobody else was coming, the four of us had dinner at a traditional restaurant opposite Fernando's hotel. Honestly, the food wasn't as good as the day before, and it was more expensive on top of that.

At 21:30 we headed to a club, where Bart treated us to beer and rum. The place had great decor, the rumba-style music was pretty good, and the best part, as always, was the company, especially meeting Emmanuel Armando Rosales, a super friendly and fun-loving mathematician, along with other colleagues :))
It's not all Ubuntu :)
Around 2 in the morning we went back to the hotel after the first day of Ubuconla, which I thoroughly enjoyed, sharing great moments with the community that makes Ubuntu what it is: unique :)
A day surrounded by...
... exceptional colleagues
Tux also crashed the party
An unforgettable day

Laptops say a lot about their owner :P
And tomorrow, more ;)
Keep reading more about this trip.

Rafał Cieślak: Multi-OS gaming w/o dual-booting: Excellent graphics performance in a VM with VGA passthrough

Planet Ubuntu - Fri, 2014-08-15 19:57

Note: This article is a technology/technique outline, not a detailed guide and not a how-to. It explains what VGA passthrough is, why you might be interested in it, and where to start.

Even with the current abundance of Linux-native games (both indies and AAAs), and with WINE reliably running almost any not-so-new software, many gamers who use Linux on a daily basis still switch to Windows for playing games. Regardless of one's attitude towards non-free software, it has to be admitted that if you wish to try out some of the newest titles, you have no choice but to run them on a Windows installation. This is why so many gamers dual-boot: with two operating systems installed on the same machine, they use Windows for playing games and Linux for virtually everything else, limiting their use of Microsoft's OS to gaming only. This popular technique seems handy – you get the luxury of using Linux, and the gaming performance of Windows.

But dual-booting is annoying because you have to reboot to switch contexts. Need to IM your friend while playing? Save your game, shut down Windows, reboot to Linux, launch the IM client, reboot to Windows, load your game. Switching takes a long time and is inconvenient, so the player may feel discouraged from doing so.

What if you could run both operating systems at once? That's nothing new: run a virtual machine on your Linux box, install Windows in it, and voilà! But an ordinary virtual machine is no good for gaming – the performance will be terrible. Playing chess might work, but anything with 3D graphics won't do, because of the lack of hardware acceleration: the VM emulates a simple graphics adapter to display its output in a window of the host OS.

And that is where VGA passthrough comes in, and solves this issue.

1. The idea

The key to getting decent graphics in a VM is to grant the virtual machine full access to your graphics card. This means that your host OS will not touch this piece of hardware at all, and the guest OS will be able to use it like any other (emulated) hardware. The guest OS (presumably Windows) will load its own drivers for the graphics adapter and communicate with it natively! Therefore it will have full access to hardware acceleration and any other goodies that gear might provide (e.g. HDMI audio). The idea of passing a VGA adapter to a virtual machine is usually called VGA passthrough.

Sounds crazy? Let me tease you: my setup is capable of smoothly running Watch_Dogs and Tomb Raider (2013) on Ultra settings at 60+ FPS within that virtual machine, using NVIDIA's GTX 770. And I get the luxury of running both OSes at once – so I can switch between them in an instant, without shutting down either one! This is astonishingly convenient.

Because the dedicated graphics hardware will be reserved for the guest system, the host will need another graphics adapter to display anything. So there comes the first hardware requirement: you need at least two graphics adapters. However, this is not uncommon – many new Intel processors have a built-in GPU – and if you are a gamer, chances are you have invested in a dedicated graphics card – so that makes two graphics adapters already. Let the host system use the integrated graphics, and the guest will get the powerful dedicated graphics for games. Because both graphics adapters work independently and there is no way to compose their video output¹, you will need two separate displays, one for each system. This means either a set of two monitors, or a monitor with two video inputs (so that you can switch between them). You might also experiment with a KVM switch.

Also keep in mind that it is not an easy thing to set up. While some claim they have succeeded on their first try, many others have struggled a lot. Personally, I spent about two weeks tuning things up to get my VGA passthrough running – and if we count hardware research and preparations, it took me two months. But it was completely worth it! My current setup consists of:

  • Intel i7-4790K (4 x 2 x 4.0GHz)
  • ASRock Z97 Extreme6
  • NVIDIA GTX 770 4GB
  • and some 16 gigs of RAM
  • also, a monitor with multiple video inputs (I switch video source using buttons on the monitor)
  • Ubuntu 14.04

As I have mentioned, this set is capable of running very demanding games at maxed settings with amazing results. How does it work in practice? It feels as if I were running both systems at once. For example, while playing a game under Windows, my Linux has an IM client running. Because I mix the sound from both systems, I can hear the notification when I get a message. So I pause the game, switch the monitor's video source with a hotkey, respond to the message, and switch the video back. If I had two monitors, I would play on one of them with the host system using the other – so I wouldn't even need to touch the monitor to switch OS, I would just rotate my head a little bit ;-)

Getting here was a lot of work, but a lot of fun too! The first step is to meet the…

2. Hardware requirements

Yeah. Not every machine will be able to do this trick. As already mentioned, you need two graphics adapters. However, it is not possible to pass through the graphics integrated in your CPU! This is because passthrough works by separating a PCI device from the host system and attaching it to the guest OS, so you can only pass through dedicated graphics hardware. Not much of a problem, probably, but it's an important note.
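A quick way to see which adapters are discrete PCI devices, and to note their vendor:device IDs for later steps, is lspci (a hypothetical one-liner; output differs per machine):

```shell
# List VGA/3D-class PCI devices together with their [vendor:device] IDs.
# If nothing matches (e.g. inside a VM), say so instead of failing silently.
lspci -nn | grep -i -e 'vga' -e '3d' || echo "no VGA-class device found"
```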

You also need to ensure that your CPU and mainboard support an IOMMU – extensions for I/O virtualization, which are necessary for passing through a PCI device. Intel calls its IOMMU technology "VT-d", while AMD refers to it as "AMD-Vi". This is an absolute must, so if you are buying new hardware, make sure both your processor and the chipset support an IOMMU²!
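As a first sanity check, something along these lines can help; note that the CPU flags below only indicate hardware virtualization support (VT-x/AMD-V), while VT-d/AMD-Vi must additionally be supported by the chipset and enabled in the BIOS/UEFI:

```shell
# 'vmx' = Intel VT-x, 'svm' = AMD-V; an empty result means no hardware virtualization.
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
# After booting with the IOMMU enabled, the kernel log should mention DMAR/IOMMU:
dmesg 2>/dev/null | grep -i -e dmar -e iommu | head -n 5
```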

Also, if you plan to use a CPU integrated graphics adapter for the host system, make sure that the mainboard supports it, and that it has a video output!

You will get the best results with a multi-core CPU. Demanding games require not only powerful graphics hardware, but a decent CPU as well! It is possible to reserve some of the CPU's cores for the VM – this way you can ensure that the guest OS gets enough computational power. For example, in my setup the host OS uses 2 cores, while the other 6 are at Windows' disposal.

Also, as explained, you need a monitor with several inputs, or a set of two. I am not aware of any way to get this working on a laptop, as most laptops have just one display and you cannot manually switch between its video sources¹.

So the full list of requirements is:

  • IOMMU compatible CPU and mainboard
  • A dedicated PCI graphics adapter (for passing through)
  • Graphics hardware for the host OS (can be integrated in CPU)
  • Monitor with multiple video inputs (recommended two monitors)
  • (Recommended: multi-core CPU).

Warning: Note that you DO NOT NEED a multi-OS graphics card! Contrary to popular belief, non-Quadro NVIDIA cards will work well, with no hardware modifications of any kind!

3. Methods

There are two popular passthrough techniques – one using Xen virtualization, the other using Qemu and VFIO. Having played around with both, I am personally a fan of the Qemu way – it is much easier to set up, I get more control over my VM, customizations are easier, and, most importantly, it works with virtually any PCI graphics adapter!

There is a lot of confusion on the Internet concerning what results each method may yield. Some say that the Qemu method can never deliver decent performance; they claim that only Xen can perform primary VGA passthrough, while Qemu's secondary VGA passthrough will be very inefficient. However, numerous people (including me) confirm that they get awesome performance with Qemu. On the other hand, it is clear that passthrough with Xen will only work with multi-OS graphics cards. This is not a problem for Radeon users, as probably all new Radeons will do just fine with Xen. However, if your NVIDIA card is not a Quadro, you have no chance with Xen! – unless you burn several resistors on the board, a modification that makes your card report itself as a Quadro… I do not recommend such hardware modifications to anyone; even if you trust the Internet a lot, the risk of rendering your precious hardware useless is far too high to make it worth the effort. Qemu, on the other hand, should work well with absolutely any PCI card.

Given these reasons, as well as customization options, I have decided to stick with Qemu. For the rest of this article, I will be describing this particular method.

There is one particularly comprehensive guide on how to set everything up using the Qemu method here – at the time of writing, that forum thread has more than 2500 replies, so finding details in it may be hard, but on the other hand every possible scenario is covered somewhere in there :) I can highly recommend that guide, but if you want to learn about the general idea first, stay with me before you jump there!

4. The software

Obviously things won’t work out of the box. There are also necessary preparations on the software side.

First, you will need to patch your kernel a bit and compile it with several options enabled. At the time of writing, the ACS override and VGA arbiter patches need to be applied manually, as they are not (yet?) included in the mainline kernel. You can find details in the guide I linked.

You will also need to configure your kernel a bit. The key is not only to ensure it activates the appropriate IOMMU modules, but also to forbid it from loading any drivers for the card you want to pass through.
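As an illustration, on many distributions this boils down to kernel boot parameters like the following. The PCI IDs here are placeholders: substitute the ones lspci -nn reports for your card, and use amd_iommu=on on AMD systems.

```shell
# Illustrative line for /etc/default/grub: enable the IOMMU and let pci-stub
# claim the GPU (and its HDMI audio function) before any graphics driver can.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pci-stub.ids=10de:1184,10de:0e0a"
# Remember to run update-grub (or your distro's equivalent) and reboot afterwards.
```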

Most likely it will also be necessary to use the git development version of Qemu – some required features are not yet available in stable releases. Also, when playing with qemu it is worth trying KVM – hardware virtualization can significantly improve the virtual machine's CPU performance.

You may also want to write a few scripts that set up other details (such as binding the PCI card to the vfio module) before qemu starts the virtual machine.
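A minimal sketch of such a bind script, assuming the vfio-pci module is loaded and using an example PCI address (run as root, and substitute your card's address as reported by lspci -D):

```shell
#!/bin/sh
# Hand one PCI device over to vfio-pci. The address below is an example.
dev=0000:01:00.0
sys=/sys/bus/pci/devices/$dev
if [ -e "$sys" ] && [ -e /sys/bus/pci/drivers/vfio-pci ]; then
    vendor=$(cat "$sys/vendor")
    device=$(cat "$sys/device")
    # detach whatever driver currently owns the card
    [ -e "$sys/driver" ] && echo "$dev" > "$sys/driver/unbind"
    # tell vfio-pci to claim devices with this vendor/device ID pair
    echo "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id
else
    echo "device $dev or vfio-pci driver not present; nothing to do"
fi
```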

Also, it may be tricky to get the order of installing drivers in the guest OS right. It took me a while to realize that I needed to disable qemu's emulated VGA – otherwise the NVIDIA drivers won't detect the dedicated hardware :-)
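Putting the pieces together, a qemu invocation in the spirit of this setup might look roughly like the sketch below. Every value is a placeholder, flags vary between qemu versions, and this is not the author's exact command; "kvm=off" is the commonly cited workaround for NVIDIA guest drivers refusing to initialise inside a detected VM.

```shell
qemu-system-x86_64 \
    -enable-kvm -m 8192 \
    -cpu host,kvm=off \
    -smp 6,sockets=1,cores=6,threads=1 \
    -device vfio-pci,host=01:00.0,x-vga=on \
    -vga none \
    -drive file=/path/to/windows.img,format=raw
```

Note the "-vga none": with it, the guest's drivers see only the passed-through card rather than qemu's emulated adapter.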

The greatest issue I have encountered is that Windows is very sensitive to hardware changes. Even the slightest change in my virtual machine (different qemu options) could leave Windows unable to boot, and none of the web guides on dealing with those boot-time BSoDs helped… Eventually, after re-installing the whole guest OS about 10 times, I was completely fed up with it. However, as long as I do not experiment with qemu settings, there are no such problems at all.

5. Peripherals

How about keyboard and mouse – should you pass them through too? You might, but it is not necessary; I use Synergy to share my mouse and keyboard between the systems, just as if they were two displays of one system. Very convenient. The script that starts qemu for me also launches the synergy server on my Linux; the client running in Windows starts automatically on boot.

If you want, you can also set up networking for the guest system – qemu has very good support for interface bridging, so it is not difficult to grant Internet access to the guest OS.
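For reference, a bridged setup typically adds a pair of flags along these lines to the qemu command (a fragment, assuming a bridge br0 already exists on the host and virtio drivers are installed in the guest):

```shell
-netdev bridge,id=net0,br=br0 \
-device virtio-net-pci,netdev=net0
```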

One could also pass through audio devices, but I believe this is not necessary – especially if you do not care about hardware audio acceleration; in that case you can get qemu to emulate a sound device and play its output like any other app on the host OS would. As a result you can hear both systems on the same speakers or headphones!

Personally, I have even gone so far as to prepare a simple app that talks to my monitor via I²C and tells it when to switch video input – this way I can use a hotkey instead of navigating its OSD menus. The same hotkey switches my keyboard and mouse between systems, thanks to synergy's customizability.

6. Conclusions

I have used this configuration for a few weeks now, and I am yet to find a game that does not perform outstandingly in this environment. Graphics performance is just as good as if I dual-booted, and CPU performance is only a tiny bit worse (but still awesome). The ability to keep all my apps running under Linux while I play games – be it a web browser, IM client, TeamSpeak or whatever else might be useful – is incredibly convenient!

Switching between systems in less than a second is really a game-changer for me (pun intended…)!

If you are excited about this technique, go ahead and read the guide. Be ready for a challenge, and do not give up if things don't work at first – you won't regret it! Good luck!

Want to know more? I will be happy to answer your general questions, but if you need help or want to learn about technical details, the best place to find answers is here.


¹) Unless your motherboard has a video multiplexer, like NVIDIA Optimus… but using it would be difficult, as you would need to control the mux manually. I believe this might be achievable, but it would most certainly require specialized drivers that do not exist right now.

²) It’s not as simple as “all new hardware supports it”, both for CPUs and motherboards. You may find lists of IOMMU-compatible hardware on the Internet, but it is probably best to ask the manufacturer itself – if they do not list it on their website, try dropping them an email; in my experience manufacturers are very keen to respond to enquiries about such sophisticated features! ;-)


Filed under: PlanetUbuntu, Ubuntu

Jono Bacon: Community Management Training at LinuxCon

Planet Ubuntu - Fri, 2014-08-15 19:27

I am a firm believer in building strong and empowered communities. We are in the midst of a community management renaissance in which we are defining repeatable best practice that can be applied to many different types of communities, whether internal to companies, external and volunteer-run, or a mix of both. The opportunity here is to grow large, well-managed, passionate communities, no matter what industry or area you work in.

I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.

LinuxCon North America and Europe

I am delighted to bring my training to the excellent LinuxCon events in both North America and Europe.

Firstly, on Fri 22nd August 2014 (next week) I will be presenting the course at LinuxCon North America in Chicago, Illinois and then on Thurs Oct 16th 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.

Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.

Space is limited, so go and register ASAP:

What Is Covered

So what is in the training course?

If you like videos, go and watch this:

If you prefer to read, read on!

My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A and discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone-on kind of session; it is interactive and designed to spark discussion.

The day is mapped out like this:

  • 9.00am – Welcome and introductions
  • 9.30am – The core mechanics of community
  • 10.00am – Planning your community
  • 10.30am – Building a strategic plan
  • 11.00am – Building collaborative workflow
  • 12.00pm – Governance: Part I
  • 12.30pm – Lunch
  • 1.30pm – Governance: Part II
  • 2.00pm – Marketing, advocacy, promotion, and social
  • 3.00pm – Measuring your community
  • 3.30pm – Tracking, measuring community management
  • 4.30pm – Burnout and conflict resolution
  • 5.00pm – Finish

I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.

I hope to see you there!
