A few people have observed dropouts using WebRTC, including on the new Debian community service at rtc.debian.org.
I've been going over some of these problems and made some improvements. If you tried it before and had trouble, please try again (rtc.debian.org is running with some of the fixes already).
In this case, you may see the picture of the other person for a split second before the WebSocket connection drops and JSCommunicator re-initializes itself.
People observing this problem sometimes found that audio-only calls worked more reliably.
I believe one reason for this problem has been some incorrect handling of the OpenSSL error queue. I posted my observations about this on the reSIProcate developers' list and included a fix for it in the recent 1.9.7 release.

Call starts, no remote picture appears, call stops after 20 seconds or so
A BYE message usually includes a reason such as this:
Reason: SIP ;cause=200; text="RTP Timeout"
This particular reason (RTP Timeout) may indicate that the TURN server is faulty, not running at all, or unreachable because one or both users are behind a firewall.
If you experience this problem but a different reason code appears, please share your observations on the repro-users mailing list so the diagnostics can be improved.
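If you are sifting through logs for these failures, the Reason header quoted above is easy to pull apart. A minimal sketch (the `parse_reason` helper is hypothetical, not part of reSIProcate or JSCommunicator):

```python
import re

# Hypothetical helper for log analysis: extract the cause code and text
# from a SIP Reason header such as the one quoted above.
def parse_reason(header):
    m = re.search(r'cause\s*=\s*(\d+)\s*;\s*text\s*=\s*"([^"]*)"', header)
    if m is None:
        return None
    return int(m.group(1)), m.group(2)

print(parse_reason('Reason: SIP ;cause=200; text="RTP Timeout"'))
# (200, 'RTP Timeout')
```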
Find joy and beauty everywhere, every day, every moment. It is around you; learn to see it.
Love yourself, and treat yourself with kindness.
Breathe deeply, and drink lots of water.
Love with an open heart, and care for those who love you back.
Spend no time and energy on those who pull you down, even family and "friends."
Value your opponents; they keep you honest, and learning. Collaborate with them if possible.
Make your bed every morning. Excuses are boring!
Brush and floss every tooth you want to keep. Get regular checkups, and follow the expert's advice.
Listen twice as much as you talk. That's why you have two ears, and only one mouth.
If you mess it up, clean up the mess. NOW.
Stay active, and keep challenging yourself physically, mentally, emotionally, and socially.
Do something scary as often as you dare. Travel! Make friends with strangers!
Spend your time and energy on the important, rather than being distracted by the urgent.
If you are unhappy, do something kind for someone else, secretly if possible.
Laugh, sing and dance as often as you can. Celebrate!
Great time, super well organized by this year’s TCamp staff. Really outstanding. Lots of really amazing discussion, and I feel a lot of effort is finally jelling around Open Civic Data, which is an absolute thrill for me.
Can’t wait to see what the next few months bring!
Last week I was invited by Canonical to their Client Sprint in Malta, along with five other community Core Apps developers:
- Andrew Hayzen and Victor Thompson: Music App developers
- Riccardo Padovani: Calculator/Reminders App developer
- Kunal Parmarl: Calendar App developer
- Nekhelesh Ramananthan: Clock App developer
There were a lot of discussions about the new designs (especially the new header and bottom edge), a lot of autopilot test fixes, and of course the Core Apps devs provided a lot of feedback about their experience using the SDK components, QTC, the Click (packaging) tools and writing autopilot tests.

HTML5 SDK
On Tuesday, Alex Abreu, David Barth and I had a meeting to discuss what needs to be done for the HTML5 SDK; I then started working on the implementation of the new header design, which will be done in steps.

Tabs PageStack
We still need to add more new APIs to the header, but this will involve updating autopilot tests to make sure everything works as expected.
I have also started working on updating some components like the Slider and Switch/Toggle while I was coming back home.

Slider
We still have a lot of things in the pipeline (i18n, grid system, sheets), so stay tuned!
It was a wonderful and very productive week; thanks to everyone, and especially Michelle & Popey!
I hope we will be invited again next time.
This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.
When you see Ubuntu on a cloud it means that Canonical has a working relationship with that cloud vendor, and the Ubuntu images there come with a set of guarantees:
- Those images are up to date and secure.
- They have also been optimised on that cloud, both for performance and cost.
- The images provide a standard experience for app compatibility.
That turns out to be a lot of work for us to achieve, but it makes your life really easy.

Fresh, secure and tasty images
We update the cloud images across all clouds on a regular basis. Updating the image means that you have more of the latest updates pre-installed so launching a new machine is much faster – fewer updates to install on boot for a fully secured and patched machine.
- At least every two weeks, typically, if there are just a few small updates across the board to roll into the freshest image.
- Immediately if there is a significant security issue, so starting a fresh image guarantees you to have no known security gotchas.
- Sooner than usual if there are a lot of updates which would make launching and updating a machine slow.
Updates might include fixes to the kernel, or any of the packages we install by default in the “core” cloud images.
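The cadence described above boils down to a simple policy. As a toy sketch (the function and its thresholds, like the 50-update backlog, are purely illustrative and not Canonical's actual release tooling):

```python
from datetime import timedelta

# Toy model of the image-respin policy described above; the thresholds
# are illustrative, not Canonical's actual tooling.
def should_respin(image_age, security_critical, pending_updates):
    if security_critical:              # serious security issue: respin now
        return True
    if pending_updates > 50:           # a big backlog would slow first boot
        return True
    return image_age >= timedelta(weeks=2)  # otherwise the regular cadence

print(should_respin(timedelta(days=3), True, 2))    # True
print(should_respin(timedelta(days=16), False, 4))  # True
print(should_respin(timedelta(days=3), False, 4))   # False
```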
We also make sure that these updated images are used by default in any “quick launch” UI that the cloud provides, so you don’t have to go hunt for the right image identity. And there are automated tools that will tell you the ID for the current image of Ubuntu on your cloud of choice. So you can script “give me a fresh Ubuntu machine” for any cloud, trivially. It’s all very nice.

Optimised for your pocket and your workload
Every cloud behaves differently – both in terms of their architecture, and their economics. When we engage with the cloud operator we figure out how to ensure that Ubuntu is “optimal” on that cloud. Usually that means we figure out things like storage mechanisms (the classic example is S3 but we have to look at each cloud to see what they provide and how to take advantage of it) and ensure that data-heavy operations like system updates draw on those resources in the most cost-efficient manner. This way we try to ensure that using Ubuntu is a guarantee of the most cost-effective base OS experience on any given cloud.
In the case of more sophisticated clouds, we are digging in to kernel parameters and drivers to ensure that performance is first class. On Azure there is a LOT of deep engineering between Canonical and Microsoft to ensure that Ubuntu gets the best possible performance out of the Hyper-V substrate, and we are similarly engaged with other cloud operators and solution providers that use highly-specialised hypervisors, such as Joyent and VMware. Even the network can be tweaked for efficiency in a particular cloud environment once we know exactly how that cloud works under the covers. And we do that tweaking in the standard images so EVERYBODY benefits and you can take it for granted – if you’re using Ubuntu, it’s optimal.
The results of this work can be pretty astonishing. In the case of one cloud we reduced the Ubuntu startup time by 23x from what their team had done internally; not that they were ineffective, it’s just that we see things through the eyes of a large-scale cloud user and care about things that a single developer might not care about as much. When you’re doing something at scale, even small efficiencies add up to big numbers.

Standard, yummy
Before we had this program in place, every cloud vendor hacked their own Ubuntu images, and they were all slightly different in unpredictable ways. We all have our own favourite way of doing things, so if every cloud has a lead engineer who rigged the default Ubuntu the way they like it, end users have to figure out the differences the hard way, stubbing their toes on them. In some cases they had default user accounts with different behaviour, in others they had different default packages installed. EMACS, Vi, nginx, the usual tweaks. In a couple of cases there were problems with updates or security, and we realised that Ubuntu users would be much better off if we took responsibility for this and ensured that the name is an assurance of standard behaviour and quality across all clouds.
So now we have that, and if you see Ubuntu on a public cloud you can be sure it’s done to that standard, and we’re responsible. If it isn’t, please let us know and we’ll fix it for you.
That means that you can try out a new cloud really easily – your stuff should work exactly the same way with those images, and differences between the clouds will have been considered and abstracted in the base OS. We’ll have tweaked the network, kernel, storage, update mechanisms and a host of other details so that you don’t have to, we’ll have installed appropriate tools for that specific cloud, and we’ll have lined things up so that to the best of our ability none of those changes will break your apps, or updates. If you haven’t recently tried a new cloud, go ahead and kick the tires on the base Ubuntu images in two or three of them. They should all Just Work TM.
It’s frankly a lot of fun for us to work with the cloud operators – this is the frontline of large-scale systems engineering, and the guys driving architecture at public cloud providers are innovating like crazy but doing so in a highly competitive and operationally demanding environment. Our job in this case is to make sure that end-users don’t have to worry about how the base OS is tuned – it’s already tuned for them. We’re taking that to the next level in many cases by optimising workloads as well, in the form of Juju charms, so you can get whole clusters or scaled-out services that are tuned for each cloud as well. The goal is that you can create a cloud account and have complex scale-out infrastructure up and running in a few minutes. Devops, distilled.
Returning to Randa is always tremendous, no matter what the work is. The house in Randa, Switzerland is so welcoming, so solid, yet modern and comfortable too.
KDE is in building mode this year. If you think of the software like a house, Frameworks is the foundation and framing. Of course it isn't quite like a house, since the dwelling we are constructing is made from familiar materials, but remade in an interactive, modular fashion. I'd better stop with the house metaphor before I take it too far, because the rooms (applications) will be familiar, yet updated. And while we can't call Plasma Next the "paint", the face KDE will be showing the world in our software will be familiar yet completely updated as well.
However, this year will see the foundation for KDE's newest surge ahead in software, from the foundation to the roof, to use the house metaphor one more time. I really cannot wait to get there and start work on our next book, the Framework Recipe book (working title). Of course, travel is expensive, and most of us will come from all over the world. So once again, we're raising funds for the Randa Meetings, which will be the largest so far. The e.V. is a major sponsor, but this is one big gathering. We need community support. Please give generously here:
Somewhat a response to
NOTE: These are personal opinions and not necessarily those of my employer, your employer, or any of the businesses and governments in my town, city, state, or country.

DRM
It is *never* something people want. Have you ever heard someone say “I really want this content I bought/‘rented’ to be harder to share/remix/watch, and to have even greater legal ramifications if I do”? They do want content made by Hollywood, but those are different things.

DRM – Mozilla being played?
This reminds me of the time Chrome said it would drop H264. From what played out in public, it seemed that Mozilla didn’t see the need to push for H264 sooner because they trusted Google to actually drop H264 support.
In a somewhat reverse situation, Mozilla has just said they will adopt EME in Firefox before any of the possible benefits are realized by others. Right now EME is implemented only in Chrome and IE 11, and I’ve only seen it used in Chrome OS and IE 11. From my point of view, I would have preferred Mozilla to at least wait and see whether we get more platform support from major vendors on this (i.e. Linux support for Netflix).
If so, maybe the increase in Linux market share would provide some balance to DRM’s negative effects, making free software a net win overall. In that case I would (still reluctantly) support Mozilla’s decision, if they saw it as a means to get Hollywood media onto freer platforms. But why not wait and see whether this actually happens with Google Chrome/Netflix on Linux?
I would like to see Mozilla pushing Indie/Crowdsourcing media, like:
Paid streaming site for indie videos https://indieflix.com/
Public broadcasting http://www.pbs.org/
Better publishing http://www.getmiro.com/
Basically: how can Mozilla use its capabilities to better change the media landscape? (A slightly “better” form of DRM should not be the answer to this question.)
Do Not Track doesn’t work (and almost certainly never will), and it gives people a false sense of doing something. You are giving advertisers another data point to track! It literally does the opposite of what it is supposed to.
- Finish blocking 3rd party cookies (https://blog.mozilla.org/privacy/2013/02/25/firefox-getting-smarter-about-third-party-cookies/)
- Promote (by adding it as a search option, etc) providers that promise to not track ANY of their users. DuckDuckGo being the most obvious example.
There is so little difference between Yahoo and Bing search… and DuckDuckGo is a damn good search engine.
- Push advertisers off of Flash (generally a good idea, but also will help with privacy – no flash cookies, etc)
Generally I’m supportive of the click-to-play and similar initiatives Mozilla has taken thus far; Flash is the exception to that rule. Here’s the outline of a plan to push advertisers off of it (the numbers are obviously made up for illustrative purposes):
- Start forcing Click-to-play for Flash when the site has more than 6 plugins running (pick some “high” number, and count all plugins, not just flash)
- Reduce the number of plugins to 5, after some number of Firefox releases or some specific Adobe Flash counting metric. Repeat pushing to 4, etc.
- Once advertisers get on board and Flash ads aren’t served by the big advertisers, now we can push Flash to click-to-play at 2 instances per page.
- Once flash usage drops under 5% , we’d be able to push it to default click-to-play for all Flash.
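The staged thresholds above could be sketched as a simple policy function (a toy model; the stages and plugin counts are the post's illustrative numbers, not anything Firefox actually implements):

```python
# Toy model of the staged click-to-play rollout sketched above.
# Thresholds are the post's made-up numbers, not real Firefox behaviour.
def click_to_play(plugin_count, stage):
    """Return True if Flash should be click-to-play at this rollout stage."""
    thresholds = {1: 6, 2: 5, 3: 4, 4: 2}   # stage -> max plugins before CtP
    if stage >= 5:                          # final stage: always click-to-play
        return True
    return plugin_count > thresholds.get(stage, 6)

print(click_to_play(7, 1))  # True: more than 6 plugins at stage 1
print(click_to_play(3, 4))  # True: more than 2 instances at stage 4
print(click_to_play(1, 5))  # True: final stage, always on
```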
You’ve removed the (easy) option to disable it. When will it go away for good? Why does Chrome let users see what protocol version (TLS 1.2 vs 1.0, etc.) their connections are using, but Firefox doesn’t?

Mobile – Firefox OS
Well, I work at a direct competitor in mobile… though not actually on our phone product…
- Launching with only Facebook contact sync isn’t very open, and goes against promoting those that share your values.
- I get that you can’t magically make the devices more open, but can we at least get public commitments for how long a device will be supported, and how often it will get Firefox OS updates?
- I wish you had used Thunderbird as Firefox OS’s email client… I think that would let it scale really well and give you a new reason to push features there. Maybe you already are, under the hood?
If you’re reading this and don’t know, you can try out Firefox OS (“Boot2Gecko”) on your desktop too! (https://nightly.mozilla.org/)

End on a positive note…
I love the new Firefox interface. It’s awesome and makes customizing the browser much better. I’m a nightly user and teach courses on Firefox. I’m not going anywhere (fast) over the DRM decision. Going to keep doing what I do and see how it pans out…
* Command & Conquer
* How-To : Python, LibreOffice, and GRUB2 Pt.1.
* Graphics : Blender and Inkscape.
* Review: Ubuntu 14.04
* Security Q&A
* What Is: CryptoCurrency
* Open Source Design
* NEW! - Arduino
plus: Q&A, Linux Labs, Ask The New Guy, Ubuntu Games, and a competition to win a Humble Bundle!
If you're making hamburgers, buns are needed. You can buy them but they are dead simple to make yourself and (like most homemade bread) they're going to taste far superior to anything that comes off a shelf.
Just in case you wanted my basic (and perfect) hamburger mince recipe too:

Hamburger Recipe
- 3 cups all-purpose flour
- 1 teaspoon salt
- 1 (1/4 oz) package active dry yeast
- 1 cup warm water
- 1 large egg
- 3 tablespoons butter, melted
- 3 tablespoons sugar
- 1 tablespoon milk
- olive oil
- 1 egg, beaten
- 1 tablespoon sesame or poppy seeds, for garnish (optional)
4 hours of waiting; 10 minutes of work; Makes 8 buns
- Dissolve the yeast in the warm water with 1/4 of the flour. Let stand until quite foamy.
- Whisk together an egg, with the milk, butter & sugar.
- Dump the remaining flour and salt into a food processor (or stand mixer with a dough hook).
- Add the wet ingredients:
- If using a food processor, slowly pour in the yeast mixture & the egg-milk mixture and blitz until the mixture comes together into a ball.
- If using a stand mixer, add the yeast & egg-milk mixtures and run the machine on low for 5-6 minutes.
- Dump the dough-mass onto a lightly floured surface and form into a ball.
- Coat lightly with olive oil and place in a large bowl. Cover lightly with a towel and let rise until it has doubled in volume – about 2 hours.
- Transfer risen dough to a lightly floured surface, and flatten a bit to remove any large bubbles.
- Form into a large even circle and divide into 8 even pieces. Form each of those into a ball, flatten slightly and place onto a baking sheet lined with parchment paper.
- Lightly drape a sheet of plastic wrap over the buns and let rise again for another hour.
- Preheat the nearest oven to 375 degrees Fahrenheit (190 Celsius).
- Lightly brush all the buns with the beaten egg and sprinkle with sesame or poppy seeds, if using.
- Bake for 15-20 minutes until the exteriors are golden brown.
- Remove & let cool completely. Slice in half crosswise & serve.
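As a quick sanity check on the oven temperature above, the Fahrenheit-to-Celsius conversion works out like this:

```python
# Check the recipe's oven temperature: 375 °F is roughly 190 °C.
def f_to_c(fahrenheit):
    return (fahrenheit - 32) * 5 / 9

print(round(f_to_c(375), 1))  # 190.6 -- commonly rounded down to 190
```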
This post is part of the series ‘Making ubuntu.com responsive’.
One of the biggest challenges when making the move to responsive was tackling the navigation in ubuntu.com. This included rethinking not only the main navigation with first, second and third level links, but also a big 3-tier footer and global navigation.
Desktop size main navigation and footer on ubuntu.com.

1. Brainstorming
Instead of assigning this task to a single UX designer, and with everyone’s availability on the team being very tight, we gathered two designers and two UX designers in a room for a few hours for an intensive brainstorming session. The goal of the meeting was to find an initial solution for how our navigation could work on small screens.
We started by analysing examples of small screen navigation from other sites and discussing how they could work (or not) for ubuntu.com. This was a good way of generating ideas to solve the navigation for our specific case.
Some of the small screen navigation examples we analysed, from left to right: the Guardian, BBC and John Lewis.
We decided to keep to existing common design patterns for small screen navigation rather than trying to think of original solutions, so we stuck with the typical menu icon on the top right with a dropdown on click for the top level links.
Settling on a solution for second and third level navigation was trickier, as we wanted to keep a balance between exposing our content and making the site easy to navigate without the menus getting in the way.
We thought keeping to simple patterns would make it easier for users to understand the mechanics of the navigation quickly, and assumed that on smaller screens users tend to forgive extra clicks if that means simpler, uncluttered navigation.
Some of the ideas we sketched during the brainstorming session.

2. Prototyping
With little time on our hands, we decided we’d deliver our solution as paper sketches for a prototype to be created by our developers. The starting point for the styling of the navigation was to follow that of Ubuntu Insights closely, with the remaining tweaks built and designed directly in code.
Navigation of Ubuntu Insights on large and small screens.
We briefed Ant with the sketches and some design and UX direction and he quickly built a prototype of the main navigation and footer for us to test and further improve.
First navigation prototype.

3. Improving
We gathered again to test and review the prototype once it was ready, and suggest improvements.
Everyone agreed that the mechanics of the navigation were right, and that visual tweaks could make it easier to understand, providing the user with more cues about the hierarchy of the content.
Initially, when scaling down the screen, the search and navigation overlapped slightly before the small-screen menu icon kicked in, so we also thought it would be nice to animate the change in padding between widths.
Final navigation prototype, after some tweaks.

Some final thoughts
When time is of the essence but you still want to experiment and generate as many ideas as possible, spending a couple of hours in a room with other team members, talking through case studies and how they can be applied to your situation, proved a really useful and quick way to advance the project. And, time and time again, inviting people who are not directly involved with the project has proved very useful in giving the team valuable new ideas and insights.
We’re now planning to test the navigation in our next quarterly set of usability testing, which will surely provide us with useful feedback to make the website easier to navigate across all screen sizes.

Reading list
- New Layouts for the Multi-Device Web
- New Rule: Every Desktop Design Has To Go Finger-Friendly
- Responsive Navigation: Optimizing for Touch Across Devices
- Responsive Navigation Patterns
- Killing Off the Global Navigation: One Trend to Avoid
- Global navigation is less useful on large, complex websites
- Implementing Off-Canvas Navigation For A Responsive Website