IETF
irtf
irtf@jabber.ietf.org
Saturday, 28 July 2012
Room Configuration

GMT+0
[13:14:20] JohnLeslie joins the room
[13:59:01] JohnLeslie leaves the room
[14:04:13] JohnLeslie joins the room
[14:51:39] tuexen joins the room
[14:57:02] Cullen Jennings joins the room
[15:01:52] <JohnLeslie> Hi Cullen... do you know when they plan to debug remote participation?
[15:06:57] Mat Ford joins the room
[15:09:24] <Cullen Jennings> hi
[15:09:39] <Cullen Jennings> Mary is getting slides up
[15:09:50] <Cullen Jennings> then in a few minutes, we will bring webex
[15:11:47] <tuexen> Will the slides be available for download?
[15:14:25] <Cullen Jennings> yes - will get you a link in a minute
[15:15:01] <tuexen> great, thanks a lot
[15:25:53] <Cullen Jennings> Slides can be found at http://www.iab.org/activities/workshops/cc-workshop/slides/
[15:28:03] coopdanger joins the room
[15:32:50] Mark Handley joins the room
[15:34:13] <tuexen> I have problems getting the Welcome and Logistics slides... Wrong link?
[15:36:47] <Mat Ford> wrong link - working on it
[15:39:47] jg joins the room
[15:40:00] jg leaves the room
[15:40:32] Jim Gettys joins the room
[15:41:01] Jim Gettys leaves the room
[15:41:02] gettys joins the room
[15:51:00] <tuexen> The link works now. Thanks for fixing.
[16:02:28] <Mat Ford> we're about to get started
[16:03:53] Lars joins the room
[16:04:15] Simon Perreault joins the room
[16:04:22] marc.blanchet.qc joins the room
[16:04:23] hta joins the room
[16:04:32] jesup joins the room
[16:04:51] mreavy joins the room
[16:04:55] Magnus joins the room
[16:05:55] spencerdawkins joins the room
[16:07:24] <spencerdawkins> So, we're all here, right? :D
[16:07:44] <tuexen> I guess so
[16:08:08] tterribe joins the room
[16:08:09] <Mat Ford> mark handley about to get started (audio woes notwithstanding)
[16:08:24] csp joins the room
[16:08:37] <Cullen Jennings> Can't hear a word he is saying
[16:08:46] <Mat Ford> mark you need to turn off your video
[16:08:56] <Mat Ford> we can't hear you at all clearly
[16:09:16] <Mat Ford> Mark's slides: http://www.iab.org//wp-content/IAB-uploads/2012/07/2-iab-cc-workshop.pdf
[16:10:45] <Mark Handley> please mute the e ike
[16:10:49] <Mark Handley> mike
[16:10:52] Gonzalo joins the room
[16:10:54] <jesup> Anyone remote who's not talking, please mute
[16:11:00] Ted Hardie joins the room
[16:11:03] <jesup> yes
[16:11:04] <Cullen Jennings> yes
[16:11:05] <Mat Ford> so far so good
[16:11:06] <Lars> works
[16:16:37] xiaoqing.zhu joins the room
[16:16:54] bob.briscoe joins the room
[16:18:54] EKR joins the room
[16:20:00] <Mat Ford> i think we lost you mark
[16:20:03] <jesup> dropped
[16:20:25] <jesup> you dropped at the point when you said we ended up on the x=y line on average
[16:20:30] <Lars> the network connection in the room got stuck
[16:20:52] <tuexen> Works over webex for me
[16:20:59] <JohnLeslie> +1
[16:21:00] <Lars> the skype call dropped
[16:21:40] <Cullen Jennings> Looks like we lost the wired network in the room
[16:21:41] <Cullen Jennings> working on it
[16:21:58] Mark Handley leaves the room
[16:22:16] <Lars> the wired hotel network died
[16:22:33] <Cullen Jennings> is Mark in the jabber room ?
[16:22:34] <Lars> we can hear you again mark
[16:27:36] <bob.briscoe> Mark means New Reno TCP - probably small and shrinking proportion of traffic these days (given uTP, Cubic and a bit of Compound)
[16:30:10] <Cullen Jennings> There are some comments in the room about the relation of frames and packets, but I'm not clear how to have that discussion with this setup, so I guess we will ignore it for now
[16:30:27] <EKR > mary, bernard: how do we ask questions?
[16:31:56] <Mat Ford> ekr: not clear that avoiding starvation is a goal
[16:32:14] <Ted Hardie> As Andrew McGregor notes: "There's no good reason to be TCP-friendly, because it won't return the favor".
[16:32:30] michael.welzl joins the room
[16:33:05] <JohnLeslie> Us remote folks have no idea what EKR is saying...
[16:33:31] <marc.blanchet.qc> us local folks have no idea what EKR is saying...
[16:33:43] <JohnLeslie> ;^)
[16:33:44] <Mat Ford> :D
[16:33:47] <EKR > well, marc, I can't help that.
[16:33:53] <tuexen> Ack: Audio from the room is much worse than audio from Mark
[16:33:54] <marc.blanchet.qc> ;-)
[16:34:26] <spencerdawkins> I was actually following that conversation ... I've come a long way ...
[16:34:37] <Cullen Jennings> The remote people are getting audio from webex, Mark is getting room audio from skype - the skype and webex mics are about 8 feet apart
[16:34:43] <Cullen Jennings> so either way we lose
[16:34:46] <EKR > Anyway, what I said was that it's not obvious that you don't want to starve TCP flows. Look, say I have a network that has bandwidth X and I want to do a video call that looks like crap at any sending rate below .9X and a TCP flow that basically is unusable at any rate less than .5X, then something has to give.
[16:35:05] <EKR > There's not a protocol congestion control level solution to this because it's a policy issue
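Reading EKR's example as the video needing at least 0.9X and the TCP flow at least 0.5X to be usable, the arithmetic behind "something has to give" is simply:

    0.9X + 0.5X = 1.4X > X

so no congestion controller can give both flows their minimum usable rate at once; how to split the shortfall is the policy question.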
[16:36:02] <spencerdawkins> EKR, I put that in my notes - thank you for typing it exactly ...
[16:36:12] <bob.briscoe> But there is a message that we don't want things like fair queuing in edge routers /forcing/ equal rates
[16:37:10] <EKR > bob: sure.
[16:37:29] <mreavy> I agree with EKR that we need to re-examine some of our common/basic assumptions about starvation and fairness
[16:37:32] <Cullen Jennings> is there more info about how common AQM is vs tail drop
[16:37:40] <jesup> 30fps (or 25fps), rock-steady, is the only way to fly for video.
[16:37:52] <hta> bob, I think anyone forcing equal rates has to answer the question of "between what" carefully. For a cable provider, equal rates between customers (of the same class) may make sense; forcing equal rates between one customer's apps is silly.
[16:38:00] <jesup> Or higher
[16:39:20] <Cullen Jennings> I don't think I agreed with the info about interactive streaming over TCP exactly, but I'm not sure how to discuss that
[16:39:22] <Ted Hardie> User based (or attached devices based) fairness has been deployed in some places with very constrained uplinks. It works at the policy level—if you tell *people* that's what's happening, they can adjust their behavior. But, as ekr notes, this is policy, not protocol.
[16:39:31] <bob.briscoe> I meant home router, not edge router (assuming one contractual user). There are other reasons against fair queuing in shared queues further into the network
[16:40:08] <bob.briscoe> "other reasons" = there are good reasons to want give & take over time, that FQ prevents
[16:41:02] <bob.briscoe> Ted, agree user-based fairness can work for clueful users if they know about it. But not for the less clueful.
[16:41:24] Ingo joins the room
[16:41:44] <EKR > I would observe that the problem of fair *scheduling* of processes is something we are still struggling with and the only two solutions that seem to work are (1) buy more hardware or (2) kill whatever is chewing up all the CPU. And that's a case where the kernel has vastly more information about what you are doing than either loss or delay-based congestion signals have
[16:42:12] <hta> Not probably wrong. Certainly wrong.
[16:42:57] <hta> B frames are useless for interactive video.
[16:42:57] <Cullen Jennings> so mpeg2 was never designed for interactive
[16:43:20] <jesup> There's a hidden assumption in Mark's comment that fixed quality is a very important goal
[16:43:21] <Ted Hardie> And note that none of this is 1 frame = 1500 bytes, which makes the math of one of the early slides pretty obviously off.
[16:43:44] <hta> Jesup, not hidden; note the slide on user perception of variable quality.
[16:43:56] <EKR > Here's a thought experiment: ignore the question of *how* you implement it. Design an algorithm that takes a set of flows with time-varying offered loads and outputs how much bandwidth they *should* consume at any given time.
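As one way to make the thought experiment concrete, here is a minimal sketch (the weights and the water-filling rule are illustrative assumptions, not anything proposed in the room) of a function that takes per-flow offered loads and policy weights and outputs target rates:

    # Hypothetical "should consume" function for EKR's thought experiment:
    # weighted max-min (water-filling) over offered loads. The weights stand
    # in for whatever policy judgement you want to encode.

    def target_allocation(capacity, offered, weights):
        """Return per-flow target rates given offered loads and policy weights."""
        alloc = [0.0] * len(offered)
        active = [i for i, d in enumerate(offered) if d > 0]
        remaining = capacity
        while active and remaining > 1e-9:
            total_w = sum(weights[i] for i in active)
            # Provisional share for each still-unsatisfied flow.
            share = {i: remaining * weights[i] / total_w for i in active}
            satisfied = [i for i in active if offered[i] - alloc[i] <= share[i]]
            if not satisfied:
                for i in active:
                    alloc[i] += share[i]
                break
            for i in satisfied:
                remaining -= offered[i] - alloc[i]
                alloc[i] = offered[i]
                active.remove(i)
        return alloc

    # Example: a 10 Mbps link, a video flow that wants 9, a TCP flow that wants 8,
    # with policy weighting the video 2:1.
    print(target_allocation(10.0, [9.0, 8.0], [2.0, 1.0]))   # ~[6.67, 3.33]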
[16:44:30] <Cullen Jennings> Does someone have the link for the webex ?
[16:44:31] <hta> EKR, with or without being able to predict the future?
[16:44:33] <jesup> hta: I would (to a degree) challenge that assumption (see above my comment on frame rates)
[16:44:49] <bob.briscoe> Eric, we designed a policer that allows r-t to take & TCP to give: search "Policing Freedom to use the Internet Resource Pool"
[16:44:54] <tuexen> https://workgreen.webex.com/workgreen/j.php?ED=196058092&UID=1295744272&PW=NZmRlOGM1NGYz&RT=MiM0
[16:45:08] <Cullen Jennings> thanks
[16:45:18] <EKR > hta: well, obviously we eventually need to not assume knowledge of the future.
[16:45:23] <EKR > but if it makes the problem easier now
[16:45:26] <gettys> but most links are shared; and the link speed is variable.
[16:45:41] <Ted Hardie> Yes, the cited experiment focused on frame rate only; if there were resolution differences or other quality differences among the frames instead of frame rate differences, the results might be significantly different
[16:45:42] Stefan Holmer joins the room
[16:46:02] <jesup> ted: exactly
[16:46:10] <EKR > bob: I don't think this is quite the same question I am asking
[16:47:44] <bob.briscoe> EKR, in what sense? Certainly that paper is about policing between 'customers' not within one 'customer's' flows
[16:47:45] benoit.claise joins the room
[16:48:04] <hta> Jesup, I thought you were supporting his point when you said "30 fps rock steady is the only way to go".
[16:48:14] <EKR > Well, I think that's too narrow a problem, for starters.
[16:48:14] <jesup> RTCP can be sent fast "enough" in AVPF, depending on a bunch of parameters
[16:48:35] <bob.briscoe> EKR, but it allows one customer to give & take within their flows
[16:48:37] <EKR > The question I'm interested in involves making *judgements* between different kinds of flows, not just trying to divide resources evenly
[16:48:42] <EKR > "allows"
[16:48:52] <EKR > The question is how they do that
[16:49:06] <EKR > In some better way than "turn off video"
[16:49:21] <jesup> hta: I agree with his research, but his comments and other assumed that a primary adaptation is in frame rate; I prefer to adapt on quality and resolution and keep frame rate up (in keeping with his research)
[16:49:51] <Stefan Holmer> if the receiver knows when there is congestion, the RTCP packet only needs to be sent quickly at the point where congestion is detected.
[16:49:52] <michael.welzl> about rtcp feedback: i agree with updating the standard if that's needed. it's perfectly acceptable to run rtp over tcp and get tons of tcp acks for the purpose of congestion control. why is it not acceptable to send more feedback when i run rtp over udp?
[16:50:01] <jesup> We also only know how to do it well if we ignore delay
[16:50:16] <bob.briscoe> EKR, It does it by the natural unresponsiveness of the r-t transport relative to the TCP (or the r-t app can do this deliberately). But it puts an envelope over the total amount of congestion one customer can cause to another (from all their apps).
[16:50:35] <michael.welzl> i understand the point of the limitation though, as a general limitation for feedback when running over whatever. but congestion control feedback is a different case
[16:50:52] <jesup> stefan++
[16:50:53] <csp> @michael: you can tune RTCP parameters to get feedback as fast as you like. it's just the default that is low rate feedback
[16:51:01] cheshire joins the room
[16:51:01] <EKR > bob: I think you're misunderstanding my point.
[16:51:19] <EKR > Again, I'm not talking about implementation.
[16:51:24] <Lars> questions for mark?
[16:52:18] <bob.briscoe> EKR, what do you think I'm talking about implementation of? Pls explain what you are asking.
[16:52:29] <michael.welzl> @csp: ok, good then. so why do people talk about it as a problem? are these parameter variations not known enough? what i saw in some rfc is that, if you ask for faster feedback, you must compensate some other time so that *on average* the feedback is not more than the rules say
[16:52:41] <Ted Hardie> Additional variable: the MPTCP mechanism is different from standard TCP. Not a lot of flows using it now, but worth thinking about in some contexts.
[16:53:07] <EKR > I already did, I thought "Here's a thought experiment: ignore the question of *how* you implement it. Design an algorithm that takes a set of flows with time-varying offered loads and outputs how much bandwidth they *should* consume at any given time."
[16:53:57] <csp> @michael: that's avpf, which lets you send earlier than your configured rate, provided you keep the average inter-packet interval. you can change the configure rate too
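A rough sketch of the budgeting idea csp describes (send feedback early on an event, provided the long-run average interval is preserved); this is not the RFC 4585 timing algorithm, and the class name and constants are illustrative:

    import time

    # Simplified AVPF-style budget: a receiver may send feedback immediately on
    # an event, as long as the *average* feedback interval stays at or above the
    # configured interval. Credit accrues with time and is spent per early send.

    class FeedbackBudget:
        def __init__(self, avg_interval_s):
            self.avg_interval = avg_interval_s
            self.credit = avg_interval_s          # start with one interval of credit
            self.last_update = time.monotonic()

        def _accrue(self):
            now = time.monotonic()
            self.credit = min(self.credit + (now - self.last_update),
                              2 * self.avg_interval)   # cap the burst
            self.last_update = now

        def may_send_early(self):
            """True if an early (event-driven) feedback packet fits the average."""
            self._accrue()
            if self.credit >= self.avg_interval:
                self.credit -= self.avg_interval
                return True
            return False

    # Usage: on detecting congestion (e.g. a delay spike), the receiver checks
    # the budget; if allowed, it sends feedback now instead of waiting for the
    # next regular report.
    budget = FeedbackBudget(avg_interval_s=1.0)
    if budget.may_send_early():
        pass  # send the feedback packet here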
[16:54:45] <michael.welzl> @csp: ok, but then why are people concerned about the feedback limitation? if that can be completely configured away, we have no problem! (good!)
[16:55:56] <Mat Ford> is the webex audio OK?
[16:56:05] <csp> @michael: if you configure for rapid feedback, you can end up with more RTCP than media. some might think that a problem. otherwise, I don't think RTCP feedback rate is a real issue.
[16:56:35] <tuexen> From the IETF: pretty hard to understand. From Mark: great
[16:56:51] <Ingo> Mark's voice is very good. The audio from the conf room is MOS ~1.5
[16:58:05] <bob.briscoe> EKR, Where are you proposing this algo should run? Host? Home gateway? It makes a big difference.
[16:58:40] <Lars> sorry about the poor audio from the room. we may try and think during the break if we can improve that somehow. for the keynote, we focused on getting mark piped into the room in good quality.
[16:59:14] <Mat Ford> we lost you mark - but no further questions
[17:00:08] <EKR > Bob: that's actually what I'm trying to get away from. I'm not trying to design a solution. I'm trying to ask what resource allocation we are trying to achieve globally. What economists call the social welfare function. The question of what mechanisms we use to approximate is different.
[17:00:15] <EKR > s/approximate/approximate that/
[17:00:48] <tuexen> I think the webex lost connection to mark and the room
[17:01:12] <tuexen> mark is on the webex
[17:01:21] <Ingo> Me too
[17:03:59] <cheshire> Mark Handley pointed out that on Wi-Fi small packets are almost as costly to send as large packets. A similar issue is that Wi-Fi can batch groups of packets together, and sending the whole batch is only a little more expensive than sending a single packet. So cutting a video frame down from two packets to one may in fact have virtually no impact on the amount of the shared resource (wireless spectrum) being consumed.
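A toy airtime model of Stuart's point, with made-up overhead and PHY-rate numbers rather than real 802.11 timings:

    # When per-transmission overhead (contention, preamble, ACK) dominates the
    # payload time, cutting a frame from two packets to one barely changes the
    # airtime consumed. Numbers below are illustrative placeholders.

    PER_TXOP_OVERHEAD_US = 500.0      # contention + preamble + ack, assumed
    PHY_RATE_MBPS = 300.0             # assumed 802.11n-class payload rate

    def airtime_us(payload_bytes):
        return PER_TXOP_OVERHEAD_US + payload_bytes * 8 / PHY_RATE_MBPS

    print(airtime_us(3000))   # frame as two 1500-byte packets in one batch: ~580 us
    print(airtime_us(1500))   # frame cut down to one packet: ~540 us, only ~7% less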
[17:04:03] <bob.briscoe> EKR, OK, then there is a proposed answer (Kelly, Varian etc said this at the IAB plenary on congestion): use congestion-volume (bytes of drops) of the flow as the 'cost' to balance against the user's value.
[17:04:36] <EKR > Again, I think you're talking about implementation.
[17:04:57] <EKR > If everyone is sending at *precisely* the right rate there will be *no* drops.
[17:05:17] <EKR > Obviously, that's not going to happen, but that's the difference between the SWF and mechanism
[17:05:22] keithw@mit.edu/barnowl joins the room
[17:05:25] luke@faraone.cc/barnowl53E629EE joins the room
[17:05:30] <bob.briscoe> EKR, eh? I've just talked about a metric, not implementation. If there are no drops, then no-one wants any more than they have already got.
[17:05:48] luke@faraone.cc/barnowl53E629EE leaves the room
[17:05:49] <EKR > No that's not true.
[17:05:54] <bob.briscoe> Why not?
[17:05:59] <JohnLeslie> WebEx is getting no audio from the room -- is anything being said in the room?
[17:06:29] <cheshire> @Bob Delay-based flow control may reduce rate *before* drops happen.
[17:06:45] <EKR > As I said, if your hypo is that people are going to be sending at *exactly* the rate that fills the channel.
[17:06:53] <cheshire> @Bob So no drops ≠ no desire for more throughput
[17:07:04] <bob.briscoe> Stuart, But if I want more, I will not respond to the delay you cause.
[17:07:34] <EKR > Look, think about congestion on roads.
[17:07:38] <cheshire> Also ECN could cause sender to reduce rate
[17:07:44] <EKR > You can still have traffic jams even if nobody is being pushed off the road
[17:07:54] <Lars> is the webex audio working? do you see slides now?
[17:07:57] <tuexen> I do see slides on the WebEx, but no audio?
[17:08:02] <bob.briscoe> Yes, drop or ECN
[17:08:07] <tuexen> ... no Audio on Webex
[17:08:42] <Mat Ford> working on it
[17:08:43] keithw joins the room
[17:08:49] keithw@mit.edu/barnowl leaves the room
[17:09:04] <bob.briscoe> Stu, ECN allows us to avoid the delay-before-drop dilemma, because delay is too fuzzy to use as a good 'fairness' metric
[17:09:30] <hta> Bob, delay is not fuzzy. The relationship between delay and fairness is.
[17:10:01] <Lars> i dialed the polycom into webex. is there audio now?
[17:10:05] billvs joins the room
[17:10:05] <tuexen> Audio is there again.
[17:10:08] <JohnLeslie> I heard Mary
[17:10:09] <Lars> ok
[17:10:15] <tuexen> Thanks
[17:10:58] <bob.briscoe> hta, delay isn't fuzzy /per se/, it's just hard to measure, esp. for someone trying to be independent of the protagonists (e.g. the queue)
[17:11:31] <keithw> I don't follow -- if you're the FIFO queue, measuring delay in-queue is easy to measure.
[17:11:35] Mark Handley joins the room
[17:11:47] <cheshire> @Bob I was just responding to this: "If there are no drops, then no-one wants any more than they have already got." Just because I have a delay-aware or ECN-aware client doesn't mean I don't *want* any more throughput. It means my client is being nice to the network.
[17:11:48] <keithw> What's hard is for the endpoints to measure delay in-queue if there is also uncertain (or varying) propagation delay.
[17:12:07] <bob.briscoe> Yup
[17:12:49] <bob.briscoe> The fuzziness is in ends & middle agreeing on a metric that can be used to arbitrate their dispute
[17:13:21] <Ingo> An additional complication with delay measurements may be stuff such as DRX (discontinous reception) used in LTE, this is used to save battery in cell phones
[17:13:42] <bob.briscoe> See Wes's paper about problems with delay-measurement techniques
[17:13:44] cabo joins the room
[17:14:32] <Ingo> But... In a bloated network with poor queue management I guess delay measurement is much better than nothing..
[17:16:08] <bob.briscoe> Ingo, If we're going to burn brain cycles solving this problem, and effort getting something deployed, which usually takes years, let's not start on something that is built on sand.
[17:16:10] <marc.blanchet.qc> when ekr is talking, I have a congestion control problem in my head. ;-)
[17:16:42] <hta> When there are timestamps in the material being sent (like in RTP), delay changes are trivial to measure. Absolute 1-way delay is nearly impossible, but doesn't have much relevant information in it.
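A minimal sketch of what hta describes: with sender timestamps in the media, the receiver can track changes in one-way delay without synchronized clocks, since the unknown clock offset cancels out of the difference (slow clock drift remains as a gradual trend). The function and field names are illustrative:

    # Relative one-way delay: d_i contains an unknown constant clock offset,
    # but *changes* in d_i track changes in queuing delay.

    def relative_owd(send_ts, recv_ts, first_send_ts, first_recv_ts):
        """All times in seconds; send_ts comes from the sender's media timestamp."""
        return (recv_ts - first_recv_ts) - (send_ts - first_send_ts)

    # Example: a packet arrives 35 ms after the first one but was sent 20 ms
    # after it -> queuing delay grew by ~15 ms relative to the first packet.
    print(relative_owd(send_ts=0.020, recv_ts=0.035,
                       first_send_ts=0.0, first_recv_ts=0.0))   # 0.015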
[17:16:43] <bob.briscoe> It's the codec ;)
[17:16:45] Mat Ford leaves the room
[17:17:30] Mat Ford joins the room
[17:17:33] Ted Hardie leaves the room
[17:17:49] Ted Hardie joins the room
[17:17:58] <bob.briscoe> But the delay changes RTP sees are only partially related to queuing delay when link rates are varying all the time (e.g. due to the excellent wireless drivers seeking the best rate by hopping)
[17:18:03] bob.briscoe leaves the room
[17:18:57] <Ingo> @Bob, I agree to some extent, one should probably not overwork a delay based algo but while waiting for the world to get bloatfree (esp the wireless world) something is needed
[17:19:22] <cheshire> "Absolute one-way delay" is a virtually meaningless concept. And I don't know how you distinguish change in one-way delay from clock drift between sender and receiver.
[17:21:23] <keithw> It is certainly a well-DEFINED concept. And if you have GPS time synchronization at both ends (or both ends are network interfaces to the same computer with the same clock), it is possible to measure.
[17:22:08] Gonzalo leaves the room
[17:32:07] <billvs> regarding one-way delay ----- What are the relative timescales of the variations in delay due to wireless issues and bufferbloat? I know that the bufferbloat related delay can range from a few ms to an arbitrarily large figure. If we know that the wireless delay variability is always less than N milliseconds, we have a starting point for a design.......
[17:34:10] bob.briscoe joins the room
[17:35:49] <Lars> audio ok, slides ok?
[17:35:55] <Ingo> Both OK
[17:35:58] <tuexen> Both good.
[17:36:02] <Lars> ok
[17:37:42] <Ingo> :-)
[17:38:44] <Lars> if you are on webex audio, please mute
[17:38:51] <Lars> we get echo over the polycom
[17:38:56] <tuexen> audio has a huge echo
[17:39:07] <Cullen Jennings> someone one webex needs to mute
[17:39:14] <Lars> mary is muting you all
[17:39:26] <tuexen> Now I don't hear anything
[17:39:35] <tuexen> now it's back
[17:39:36] <Lars> mary muted us too :-) should be unmuted now
[17:40:37] <Cullen Jennings> can you hear on webex now?
[17:40:43] <tuexen> OK
[17:40:45] <Ingo> Yes
[17:41:41] <michael.welzl> @ekr: you said something at the mic after my presentation which i didn't quite get acoustically - would you like to state it again here, so i can answer it?
[17:43:24] <keithw> billvs: I think the two "kinds" of delay (bufferbloat and wireless) are interrelated on these LTE- and EVDO-style networks.
[17:43:40] <michael.welzl> btw apologies to y'all for having given a horrible presentation - that was way below my normal standards... i was caught by surprise, and i agree with harald that the slides were confusing, i made them in a rush as i travelled here and didn't notice that problem when copying stuff together. sorry! anyway, the point came across, i think.
[17:43:43] <keithw> If you send one datagram, but the receiver is in the middle of a 5-second radio outage, they will get it five seconds later.
[17:45:12] xiaoqing.zhu leaves the room
[17:50:29] <Mat Ford> now on 'Impact of TCP - Paper 9'
[17:53:59] <Lars> can the remote people hear bob when he speaks from the back of the room?
[17:54:09] <tuexen> yes
[17:54:20] <Lars> wow polycom mike better than i thought
[17:54:30] <Mat Ford> Paper 25
[17:54:54] <spencerdawkins> mat, thank you for your help on flagging papers - they aren't in agenda order! :-)
[17:57:10] <marc.blanchet.qc> even the uploaded ppt is not exactly the same as the one being presented. (no criticism intended, just a warning for people: use webex if you really want to follow the slides as they are presented)
[17:58:05] <Mat Ford> Paper 11
[17:59:44] benoit.claise leaves the room
[18:00:06] <michael.welzl> just as an extra data point, we've carried out similar tests to the ones in paper 11 a long time ago, and saw similar problems: http://heim.ifi.uio.no/~michawe/research/publications/iscc05.pdf actually there are many more unpublished results (student theses), looking at games etc. in a similar way - all the time showing such trouble
[18:00:36] <michael.welzl> one thing we saw very often is that applications increased their packet size in response to congestion! i remember that several games did that
[18:01:05] <tuexen> and lower the packet rate?
[18:01:17] <michael.welzl> nah :-)
[18:01:32] <michael.welzl> increase the bitrate by sending just as much, but larger packets
[18:01:50] <tuexen> Too bad.
[18:02:30] <Ingo> I've seen that, for instance, skype lowers the packet rate as a reaction to congestion; in addition the codec rate is also dropped. At least this applies to audio
[18:02:42] <michael.welzl> but all of that is quite many years ago, in the rough 2003-2005 time frame, so not sure how valid that still is
[18:03:09] <Simon Perreault> games can't vary codec settings...
[18:03:16] <Ingo> My data is from 2011 but things may have changed since
[18:04:00] <Ingo> @Simon, probably right, have not looked at it.
[18:04:20] <Simon Perreault> hard to tell without looking at the source code
[18:04:41] <Simon Perreault> my guess is they have a fixed amount of data to transfer, which would explain why congestion results in larger packets
[18:06:08] <Mat Ford> paper 30
[18:07:16] <hta> TCP will send all the data it has in the output buffer when the cwnd opens up, I think. If the game writes to the output buffer in small chunks, larger packets on congestion are completely expected.
[18:07:34] <Simon Perreault> hta: yup, that's my thinking as well
[18:07:36] <michael.welzl> that was all UDP
[18:08:01] <hta> michael, would apply for TCP-over-UDP type solutions as well.
[18:08:24] <Simon Perreault> ah, then the game would do something TCP-like in UDP with a buffer of unacknowledged data
[18:08:54] <bob.briscoe> Cullen, AIAD can still converge to a central point (like Mark Handley's AIMD animation), as long as the additive increase is per RTT and the additive decrease is per nack (drop or ECN). Because as congestion rises, the nacks per RTT increases.
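A minimal per-RTT sketch of the AIAD behaviour Bob describes (additive increase once per RTT, additive decrease once per nack); the constants are placeholders:

    # Because a more congested path produces more nacks (losses or ECN marks)
    # per RTT, heavier senders back off more per RTT, which is what pulls
    # rates together over time.

    ALPHA_KBPS = 50.0    # additive increase per RTT (placeholder)
    BETA_KBPS = 50.0     # additive decrease per nack (placeholder)
    MIN_RATE_KBPS = 50.0

    def aiad_update(rate_kbps, nacks_this_rtt):
        rate_kbps += ALPHA_KBPS
        rate_kbps -= BETA_KBPS * nacks_this_rtt
        return max(rate_kbps, MIN_RATE_KBPS)

    # A flow sending faster into the same bottleneck sees proportionally more
    # nacks per RTT, so its net change per RTT is more negative than a slow flow's.
    print(aiad_update(2000.0, nacks_this_rtt=3))   # 1900.0
    print(aiad_update(500.0,  nacks_this_rtt=0))   # 550.0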
[18:09:16] <michael.welzl> ah, yes, buffer management... that makes sense
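A toy illustration of the buffer-coalescing explanation above (hta and Simon): if the application keeps writing small chunks while the transport is blocked, the backlog goes out as larger packets once sending resumes, so packet sizes grow under congestion even though the application changed nothing. This is a simplified model, not real TCP:

    # Toy send-buffer model: the app writes fixed 100-byte chunks every tick;
    # the transport sends min(buffered, allowance, MSS) per packet when allowed.

    MSS = 1460

    def packets_sent(app_chunks, cwnd_schedule):
        """cwnd_schedule[i] = bytes the transport may send at tick i (0 = blocked)."""
        buffered, sizes = 0, []
        for chunk, allowance in zip(app_chunks, cwnd_schedule):
            buffered += chunk
            while allowance > 0 and buffered > 0:
                pkt = min(buffered, allowance, MSS)
                sizes.append(pkt)
                buffered -= pkt
                allowance -= pkt
        return sizes

    # Uncongested: window always open -> 100-byte packets.
    print(packets_sent([100] * 5, [10_000] * 5))          # [100, 100, 100, 100, 100]
    # Congested: blocked for 4 ticks, then the window opens -> one 500-byte packet.
    print(packets_sent([100] * 5, [0, 0, 0, 0, 10_000]))  # [500]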
[18:10:18] Stefan Holmer leaves the room
[18:10:19] Simon Perreault leaves the room
[18:10:26] hta leaves the room
[18:10:26] Lars leaves the room
[18:10:29] marc.blanchet.qc leaves the room
[18:10:34] spencerdawkins leaves the room
[18:11:15] <Ingo> @Bob The ECN based rate adaptation with results outlined in paper 4 does something like this,
[18:11:20] g.white joins the room
[18:11:48] spencerdawkins joins the room
[18:12:02] hta joins the room
[18:12:11] marc.blanchet.qc joins the room
[18:12:21] <Ingo> I was stuck with a set of fixed bitrates in the experiment setup though so it was not quite like AIAD.
[18:13:09] spencerdawkins leaves the room
[18:13:20] spencerdawkins joins the room
[18:13:21] Cullen Jennings leaves the room
[18:13:33] Cullen Jennings joins the room
[18:14:03] EKR leaves the room
[18:14:42] <spencerdawkins> did we all bounce? I sure did ...
[18:15:15] <tuexen> Some of you did, not all
[18:15:37] Magnus leaves the room
[18:17:09] Stefan Holmer joins the room
[18:18:14] <spencerdawkins> this is paper 6 ...
[18:18:15] <Cullen Jennings> @bob - agree. What we did was take a link and run video and some other stuff on it and then speed up the link and see what the curve was for video to restabilize, then we reduced the speed of the link and waited to see how long it took for the video speed to restabilize. We did not run enough data to be super conclusive but it was pretty clear from messing around with the apps that they were stepping the speed down slowly and not in an aggressive divide-by-two sort of way
[18:19:33] <csp> can reduce FEC latency by running across flows, if you have multiple flows in parallel, right?
[18:19:50] Simon Perreault joins the room
[18:22:06] spencerdawkins leaves the room
[18:22:20] <billvs> to csp --- yes, but that clearly impacts the protection ratio of each flow.
[18:22:42] spencerdawkins joins the room
[18:23:10] <billvs> the ratio of protection period, BW and the amount of resiliency is a linear relationship, and you don't get something for nothing
[18:26:08] gettys leaves the room
[18:27:38] cheshire leaves the room
[18:28:38] mreavy leaves the room
[18:28:38] jesup leaves the room
[18:28:48] Mat Ford leaves the room
[18:29:20] jesup joins the room
[18:30:01] mreavy joins the room
[18:30:55] Mat Ford joins the room
[18:31:55] <Mat Ford> Starbucks FTW
[18:32:04] <Simon Perreault> Mat Ford: +1
[18:32:20] Magnus joins the room
[18:36:25] <hta> FWIW, the ietf.1x username is ietf and the password is ietf.
[18:37:21] Simon Perreault leaves the room
[18:37:23] simon.perreault joins the room
[18:37:42] <simon.perreault> hta: still doesn't associate. starbucks ftw indeed.
[18:38:13] <csp> eduroam ;-)
[18:38:18] <hta> Simon, guess I took the free slot - that's what I'm on now.
[18:39:48] <hta> you may have to select PEAP and MSCHAP2 too.
[18:40:01] <hta> (my config from the last IETF just worked)
[18:40:23] <simon.perreault> the problem looks similar on ietf and ietf.x: it just doesn't associate
[18:40:33] <simon.perreault> as if there was a low limit of stations on the ap
[18:41:22] <simon.perreault> and ietf-a is gone as far as i can tell
[18:47:23] <Ted Hardie> To rephrase this, should we be designing the solution for this set of network conditions, or the ones after, or both?
[18:47:47] <Ted Hardie> Both, I assume.
[18:48:16] <Ted Hardie> But worth noting now—building a solution that only applies to the current situation will only result in us having to re-build (hopefully soon)
[18:48:31] <hta> We need something that's "more useful than nothing" on the present network, and "more useful than nothing" on the next generation that we currently imagine.
[18:48:55] <hta> The next generation won't be exactly what we imagine, so rebuilding is just a fact of life.
[18:49:31] <hta> I think this is the FCC Latency under Load metric: http://transition.fcc.gov/cgb/measuringbroadbandreport/technical_appendix/12TestDescription.pdf
[18:49:52] hta leaves the room
[18:51:49] hta joins the room
[18:52:10] <hta> are we back?
[18:52:15] <simon.perreault> you are
[18:52:32] <spencerdawkins> harald, we saw you leave and re-enter
[18:52:42] <Ingo> Question: does ECN help in this respect? I've done some simulation experiments (LTE network) with concurrent TCP elephant+mouse flows and it becomes possible to reach quite low latencies. Any comments?
[18:52:55] <Ted Hardie> I think ECN can't help much if there is a single queue
[18:53:03] <Ingo> low latencies for the mice flows
[18:53:04] <hta> can someone who dares to run java on his browser run the test from http://www.broadband.gov/ and report our latency under load here? .-)
[18:53:15] <keithw> ECN can help if marking is applied when the queue is small, even if the queue is big.
[18:53:20] <keithw> Smaller queues can also help.
[18:54:19] <Ted Hardie> @keithw I couldn't parse your statement. Does "When the queue is "lightly filled" when the queue depth is potentially large" mean what you mean?
[18:54:30] <keithw> Yes, sorry.
[18:54:49] simon.perreault leaves the room
[18:54:53] <Ingo> The thing is that with dropping AQMs one needs to ensure enough packets in the queue to create dupacks; with ECN that is not really needed?
[18:55:09] <keithw> I mean, you can improve latency under load by (a) dropping packets when the queue grows beyond a small size or (b) marking packets when the queue grows beyond a small size (if the endpoints support ECN).
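A bare-bones sketch of the two options keithw lists, for a single FIFO; the threshold and packet representation are placeholders, not any particular AQM design:

    from collections import deque

    # Once the standing queue exceeds a small threshold, either ECN-mark ECT
    # packets or drop non-ECT packets.

    QUEUE_THRESHOLD_PKTS = 20

    class TinyAQM:
        def __init__(self):
            self.q = deque()

        def enqueue(self, packet):
            """packet is a dict with at least an 'ect' flag."""
            if len(self.q) >= QUEUE_THRESHOLD_PKTS:
                if packet.get("ect"):
                    packet["ce"] = True          # mark instead of dropping
                else:
                    return False                 # drop: non-ECT gets no gentler signal
            self.q.append(packet)
            return True

        def dequeue(self):
            return self.q.popleft() if self.q else None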
[18:55:11] Mat Ford leaves the room
[18:55:17] Simon Perreault joins the room
[18:55:32] <hta> I talked to one guy about ECN who made the point that routers should put ECN-capable packets into a different queue from non-ECN-capable packets - so that the people who're not doing ECN don't destroy performance for those who do respect ECN.
[18:55:57] Mat Ford joins the room
[18:55:59] <Stefan Holmer> that would be nice
[18:58:07] <Ted Hardie> I think you still need 4 queues under that scheme—ECN capable, shallow queue; ECN capable, latency tolerant; non-ECN capable, shallow queue; non-ECN capable, latency tolerant
[19:00:31] <hta> Ted, those 4 would be better than 2, sure. I'm not sure how to write the classifier between shallow/latency tolerant that functions without configuration and without new protocols (NSIS/RSVP), though.
[19:01:02] <keithw> I am not sure what the rules would be for this exercise. If we are proposing arbitrary changes to the routers, we could propose that they support XCP marking and say that real-time flows should use XCP.
[19:01:26] <keithw> Real-time applications might very well desire very expressive marking and not just a small number of pre-arranged queues.
[19:02:02] <jesup> killing lip-sync is no a solution.... :-/
[19:02:16] <jesup> s/no/not/
[19:02:31] <g.white> @hardie. why be so friendly to non ecn-capable flows?
[19:02:56] <Ted Hardie> @g.white I'm just an old softy, I guess.
[19:02:58] <Stefan Holmer> keithw, that's true. if we can change the routers the problem is immediately simpler. but assuming we can't affect routers...
[19:03:54] <marc.blanchet.qc> decreasing the "user experience" for subscribers that do not run the right version of the OS that does the right thing with ECN, is often a non-starter in the provider world.
[19:03:56] <Ingo> Perhaps a separate not-ECT and ECT queue may be good, less sure about the additional queues (unless QoS is used of course)
[19:04:50] <keithw> Ok, but my Web browser is ECT-capable. I still would love if Skype did not have head-of-line blocking behind it.
[19:06:08] <Ingo> Make Skype ECT :-)
[19:06:12] <marc.blanchet.qc> agree in principle, but in fact, my comment could be changed from "not run the right OS" to "not run whatever OS/apps/home router/..."
[19:07:12] <keithw> I'm saying that both HTTP and Skype probably set ECT. But if they then end up in the same queue (which HTTP has filled), we're in trouble.
[19:08:02] bob.briscoe leaves the room
[19:08:04] <marc.blanchet.qc> I'm just saying that any solution that needs to be implemented in access providers cannot start with an assumption that some end-users that do not have the right setup will be penalised in their user experience compared to the others. it usually just falls off the table in my experience working with providers.
[19:08:16] <hta> keithw, the theory of ECN is that HTTP would back off before the queue is filled, isn't it? See paper #4 (or whatever it was)
[19:08:57] <keithw> You're right, although that depends how the AQM is configured (i.e. when do you start ECN-marking?).
[19:09:38] <keithw> I guess I would also say that it's not just up to the user -- for downloading, it's the Web site they're downloading from that has to respond to ECN to slow down.
[19:10:14] <marc.blanchet.qc> agree
[19:11:56] <Ingo> The ECN marking in paper #4 is quite simple, I recall that the delay threshold was set to 100ms. There is also a channel dependency function built in, meaning that ECN marking may occur before the 100ms threshold is reached.
[19:12:09] cabo leaves the room
[19:12:10] cabo joins the room
[19:12:22] EKR joins the room
[19:12:23] <Ingo> I mean channel quality dependency
[19:14:50] xiaoqing.zhu joins the room
[19:17:55] <Ted Hardie> The upgrade cost isn't the box cost, it's the support cost.
[19:18:24] EKR leaves the room
[19:19:07] Mat Ford leaves the room
[19:19:14] <marc.blanchet.qc> while it would be great (to have the power to upgrade the CPEs), the reality is that replacing them is very difficult, because it means a lot of money for access providers if they pay for it, and it is even more difficult if the device is user-bought. and IPv6 sadly does not really yet move this forward. so the assumption that CPE will be upgraded "easily" is false to me, sadly.
[19:21:16] <hta> the wonderful PR gaffe that cisco did with its "cloud managed CPE equipment" isn't going to help either...
[19:24:02] Magnus leaves the room
[19:27:59] Mat Ford joins the room
[19:28:47] Ted Hardie leaves the room
[19:29:01] Simon Perreault leaves the room
[19:29:09] Stefan Holmer leaves the room
[19:29:17] g.white leaves the room
[19:30:09] EKR joins the room
[19:30:36] <Ingo> Is it lunch?
[19:31:22] <Mat Ford> lunch
[19:31:24] <Mat Ford> yes
[19:31:34] <Mat Ford> back a little after 1300hrs I expect
[19:31:42] Mat Ford leaves the room
[19:33:09] billvs leaves the room
[19:34:02] <Ingo> ok
[19:34:54] keithw leaves the room
[19:35:26] michael.welzl leaves the room
[19:36:06] Stefan Holmer joins the room
[19:36:07] Stefan Holmer leaves the room
[19:36:10] EKR leaves the room
[19:43:39] csp leaves the room
[19:44:46] keithw joins the room
[19:46:09] cabo leaves the room
[19:47:04] Stefan Holmer joins the room
[19:48:27] Mat Ford joins the room
[19:50:26] <marc.blanchet.qc> off-topic: I don't know about you, but I found this hotel lunch above average compared to others similar: tasty, fresh, ...
[19:51:23] <Mat Ford> :1
[19:51:28] <Mat Ford> +1
[19:52:36] JohnLeslie leaves the room
[19:53:27] JohnLeslie joins the room
[19:55:52] g.white joins the room
[20:01:06] simon.perreault joins the room
[20:11:47] mreavy leaves the room
[20:12:17] jesup leaves the room
[20:13:44] jesup joins the room
[20:14:41] mreavy joins the room
[20:14:41] gettys joins the room
[20:15:23] cabo joins the room
[20:16:16] csp joins the room
[20:17:04] gettys leaves the room
[20:17:16] gettys joins the room
[20:19:40] <marc.blanchet.qc> I claim no expertise in transport, but I would push the idea that, given "everything" is wireless these days, the cc solution should (first) work at its best under the conditions and patterns of these (wireless) networks (such as fast changes in link speed and ...), while knowing that one doesn't know whether one is on a wireless link or whether an upstream link is wireless.
[20:22:26] Ted Hardie joins the room
[20:29:19] <tuexen> Audio not working right now...
[20:29:41] <Ted Hardie> it's just room chatter now.
[20:29:53] <coopdanger> the conference phone is muted until we start
[20:29:53] <Ted Hardie> I think they may turn it back on when we start up again
[20:30:01] <tuexen> OK
[20:30:47] <Mat Ford> we're getting started
[20:31:13] billvs joins the room
[20:31:25] <tuexen> Audio works...
[20:48:56] dirk.kutscher joins the room
[20:51:09] billvs leaves the room
[20:51:46] billvs joins the room
[21:02:21] <marc.blanchet.qc> I agree with Stuart about "skype works most of the time". it works less well over wireless in my experience. and pushing my previous chat msg: wireless network characteristics should weigh heavily in the input to this work.
[21:04:29] Sean Turner joins the room
[21:04:30] <JohnLeslie> I disagree with Marc -- if Skype "mostly works", we should aim for something where it still "mostly works", not optimize for an even worse network situation. Wireless networking is changing too fast for it to be a good target to aim at.
[21:05:20] <Ted Hardie> But the 3g networks have queue separation, and they use *other* queues for voice, right? So the key question is really can you trigger the behavior at will. 802.11n etc. are a different issue.
[21:05:35] <JohnLeslie> BTW, are any slides being shown in the room?
[21:05:46] <spencerdawkins> nope
[21:05:48] <Ted Hardie> @johnleslie not really
[21:05:59] <Ted Hardie> There's a slide from 25 minutes ago still up.
[21:06:06] <simon.perreault> and a blue screen on the right
[21:06:08] <keithw> Cellular network downlinks often have queue separation _by customer_, but all IP packets destined to the same handset generally end up in the same FIFO.
[21:06:13] <JohnLeslie> (I noticed!)
[21:09:26] <Ted Hardie> @keithw have the service option queue management things gone in LTE, then? (I am no longer in that business, so I may have missed that change)
[21:09:53] <Ted Hardie> @keithw, like service option 60, say?
[21:13:04] <keithw> I have no idea, sorry.
[21:13:54] <keithw> I wouldn't know how to tell (a) if it was ratified into the standard (b) if so, if it was deployed on a particular LTE network / eNodeB (c) if so, how you could benefit from it or activate it from IP or a connected device.
[21:14:23] <csp> is it congestion collapse if the data arrives, but is too late to be useful and so is discarded at the receiver?
[21:14:39] Ingo leaves the room
[21:16:01] <JohnLeslie> Congestion Collapse isn't about _when_ data arrives, but whether data arrives while the congestion persists.
[21:17:51] <marc.blanchet.qc> let me rephrase: we should put in the use case scenarios that there is very high probability that somewhere in the path between two peers, there is at least one wireless link (wireless being of various kinds: 3g, LTE, wifi, ...), and therefore we should take into account the known behaviors in these networks as an important "context" to consider for the design. (Some of these behaviors have been presented this morning.)
[21:18:46] <JohnLeslie> Marc: that's better phrasing; but I suspect we don't understand why
[21:18:57] <JohnLeslie> Skype "mostly works".
[21:18:58] <csp> it's about goodput – for real-time, late is not good
[21:19:24] <JohnLeslie> csp: agree, late is bad...
[21:21:22] <marc.blanchet.qc> why? because in my world, "every" end-device is wireless, and therefore most likely to be the source and/or destination of the flows to be "congestion controlled".
[21:21:59] <JohnLeslie> Marc: recall, wireless is rife with non-congestion losses...
[21:22:43] <keithw> By contrast, LTE and EV-DO have virtually no IP losses at all (congestive or otherwise).
[21:26:18] cheshire joins the room
[21:43:47] Sean Turner leaves the room
[22:00:34] Ted Hardie leaves the room
[22:03:06] simon.perreault leaves the room
[22:03:08] Simon Perreault joins the room
[22:03:09] Simon Perreault is now known as simon.perreault
[22:03:09] simon.perreault is now known as Simon Perreault
[22:03:42] hta leaves the room
[22:07:20] Simon Perreault is now known as simon.perreault
[22:07:20] simon.perreault is now known as Simon Perreault
[22:09:41] gettys leaves the room
[22:10:47] Mat Ford leaves the room
[22:12:24] Simon Perreault is now known as simon.perreault
[22:12:25] simon.perreault is now known as Simon Perreault
[22:13:12] g.white leaves the room
[22:14:21] hta joins the room
[22:17:25] Simon Perreault is now known as simon.perreault
[22:17:25] simon.perreault is now known as Simon Perreault
[22:20:41] csp leaves the room
[22:22:28] Simon Perreault is now known as simon.perreault
[22:22:28] simon.perreault is now known as Simon Perreault
[22:27:20] EKR joins the room
[22:27:32] Simon Perreault is now known as simon.perreault
[22:27:32] simon.perreault is now known as Simon Perreault
[22:28:11] cabo leaves the room
[22:28:13] cabo joins the room
[22:28:31] <EKR > test
[22:28:36] <Cullen Jennings> thanks
[22:28:38] <Simon Perreault> pong
[22:28:42] <Cullen Jennings> ping
[22:29:54] Mat Ford joins the room
[22:31:54] csp joins the room
[22:32:36] Simon Perreault is now known as simon.perreault
[22:32:36] simon.perreault is now known as Simon Perreault
[22:32:40] Cullen Jennings leaves the room
[22:33:13] Cullen Jennings joins the room
[22:34:40] EKR leaves the room
[22:36:13] xiaoqing.zhu leaves the room
[22:37:39] Simon Perreault is now known as simon.perreault
[22:37:39] simon.perreault is now known as Simon Perreault
[22:39:24] g.white joins the room
[22:42:41] Simon Perreault is now known as simon.perreault
[22:42:41] simon.perreault is now known as Simon Perreault
[22:47:45] Simon Perreault is now known as simon.perreault
[22:47:45] simon.perreault is now known as Simon Perreault
[22:48:22] Simon Perreault is now known as simon.perreault
[22:48:22] simon.perreault is now known as Simon Perreault
[22:48:43] Simon Perreault leaves the room
[22:48:43] simon.perreault joins the room
[22:48:56] simon.perreault is now known as Simon Perreault
[22:48:57] Simon Perreault is now known as simon.perreault
[22:48:57] simon.perreault is now known as Simon Perreault
[22:49:33] Simon Perreault leaves the room
[22:49:41] michael.welzl joins the room
[22:49:54] simon.perreault joins the room
[22:54:44] Lars joins the room
[22:55:00] <Lars> multcp is back
[22:55:48] <spencerdawkins> lars - is that good :-)
[22:55:52] <spencerdawkins> ?
[22:56:35] <Lars> ah, it's for flows originating on different hosts. nevermind.
[23:01:45] Ted Hardie joins the room
[23:04:02] <spencerdawkins> he's asking about the delay IN REACTING, right?
[23:04:11] <keithw> Yes.
[23:04:34] <michael.welzl> is the dinner limited to workshop participants, or is it ok to ask (congestion controlling) friends to join?
[23:04:34] <spencerdawkins> and that's not what she answered, right?
[23:05:16] <keithw> I didn't quite understand her answer.
[23:07:31] <keithw> My question is: if the bottleneck link is running at 3 Mbps, and you're sending at 90% of that, and then suddenly the link goes to 0.3 Mbps (but continues to buffer everything without loss), if you keep sending at the original rate for even 1 second before reacting, you will induce a 10 second delay in the video.
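keithw's numbers written out, assuming the link drops to 0.3 Mbps at t = 0 and the sender keeps going at 2.7 Mbps for one more second before reacting:

    # Backlog built while still sending at the old rate into the slowed link,
    # and how long that backlog takes to drain at the new rate.
    old_send_mbps = 0.9 * 3.0     # 2.7 Mbps
    new_link_mbps = 0.3
    react_after_s = 1.0

    backlog_mbit = (old_send_mbps - new_link_mbps) * react_after_s   # 2.4 Mbit
    drain_s = backlog_mbit / new_link_mbps                           # 8.0 s
    print(round(backlog_mbit, 1), round(drain_s, 1))
    # every later frame sits roughly 8-10 s behind the queue until it drains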
[23:08:49] xiaoqing.zhu joins the room
[23:09:08] <keithw> So reacting on the scale of seconds to a signal seems like way too long if the application is interactive videoconferencing.
[23:09:09] <simon.perreault> is it necessary for the CC algorithm to also work around bufferbloat?
[23:09:57] <simon.perreault> i mean, without bufferbloat you wouldn't have a 10-second delay, you would just have lots of packet loss
[23:12:32] <Cullen Jennings> if the network goes from 3 to .3, you are going to have a huge video artifact one way or the other - at this point you are picking your evil
[23:12:37] <billvs> keithw - In my mind, there are multiple signals at play here, with different timescales. Loss (and/or delay) would be handled at a very fine timescale, and it would override any ECN/PCN/external rate estimation algorithms
[23:13:26] <billvs> in other words, you would use an algorithm to try and find the right long term rate, and if it was wrong - the dropped packets would trigger a fast, hard downshift
[23:14:02] <Cullen Jennings> +1 on Mo point we need models that bring all the signalls into the input
[23:14:03] <keithw> Whether it's bufferbloat to have a 0.3 megabyte buffer sort of depends.
[23:14:45] Adium joins the room
[23:14:49] <keithw> If you measure your buffer in megabytes, and the LTE downlink can be as high as 30 Mbps, then provisioning 0.3 megabytes (<100 ms) is not crazy. But it does become a problem when the link speed drops.
[23:14:54] <Cullen Jennings> yes - this is paper 19
[23:16:01] <keithw> If the link speed drops from 3 Mbps to 0.3 Mbps, you definitely are going to have an artifact in that the quality must drop commensurately. Whether you induce a 10 second delay on TOP of that is a separate and additional evil!
[23:17:23] <billvs> Once again, in my mind you need an algorithm that tracks long term trends and sets an idealized rate that the system tries to hit. It also has a short term algorithm that down-shifts when the long term algorithm is found to be wrong
[23:17:47] <billvs> the down shift is fast and hard - the upshift is more gradual
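A skeletal version of the split billvs describes: a slowly adapted target rate plus a fast, hard downshift when short-timescale signals say the estimate was wrong. The gains and the half-rate cut are placeholder assumptions, not a worked-out controller:

    # Two-timescale rate controller: gradual tracking of a long-term estimate,
    # overridden by an immediate cut on a fast congestion signal.

    SLOW_GAIN = 0.05          # fraction of the gap closed per update (placeholder)
    HARD_CUT = 0.5            # multiplicative downshift on a fast signal (placeholder)
    MIN_RATE_KBPS = 100.0

    class TwoTimescaleRate:
        def __init__(self, start_kbps):
            self.rate = start_kbps

        def slow_update(self, long_term_estimate_kbps):
            """Gradual upshift/downshift toward the long-term capacity estimate."""
            self.rate += SLOW_GAIN * (long_term_estimate_kbps - self.rate)

        def fast_congestion(self):
            """Loss or a delay spike: downshift fast and hard."""
            self.rate = max(self.rate * HARD_CUT, MIN_RATE_KBPS)

    ctrl = TwoTimescaleRate(1000.0)
    ctrl.slow_update(1500.0)     # creeps up toward 1500
    ctrl.fast_congestion()       # immediately halves on a loss burst
    print(round(ctrl.rate, 1))   # 512.5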
[23:18:44] <keithw> Sure, that may be -- but I'm not sure what the benefit is from dividing it into two algorithms.
[23:19:55] <billvs> Do you have a counter-proposal? Let's get the options on the table...........
[23:21:30] Lars leaves the room
[23:28:05] bob.briscoe joins the room
[23:29:03] xiaoqing.zhu leaves the room
[23:32:30] EKR joins the room
[23:32:33] <EKR > http://www.amazon.com/RTP-Audio-Video-Internet-paperback/dp/0321833627/ref=sr_1_3?ie=UTF8&qid=1343518324&sr=8-3&keywords=colin+perkins
[23:33:46] <csp> Or, if you don't want to give me money, RFC 1889 section 6.3.1 :-)
[23:34:58] EKR leaves the room
[23:35:53] <spencerdawkins> which paper is this?
[23:36:07] <billvs> 15??
[23:36:19] <spencerdawkins> sure, let's go with that :-)
[23:36:52] EKR joins the room
[23:37:08] xiaoqing.zhu joins the room
[23:38:57] EKR leaves the room
[23:39:17] <michael.welzl> wow, brand new! the amazon page says april 2012 - congratulations colin on your book!
[23:39:45] EKR joins the room
[23:40:02] Magnus joins the room
[23:40:06] <csp> now available in paperback – same content
[23:41:20] <csp> (I have an outline for a 2nd edition, but lacking time to write it...)
[23:57:31] <hta> I see that 9 people have replied "no" to the dinner doodle at http://doodle.com/rkue58thhcuee3rg - Doodle did something to their UI that made me reply "no" to something without meaning to, so you might all want to check that you've given the answer you expected...
[23:58:01] <spencerdawkins> I'm also trying to decode "if needed" :D
[23:59:08] <simon.perreault> and two people use doodle so much that they took the time to upload an avatar pic