In the paper, a token approach is used to put an upper bound on the amount of CPU that any one VM can receive. Under this scheme, a slice with a 10% reservation gets 100 tokens per second, since each token entitles it to run a process for one millisecond. The default share is actually a small reservation, providing the slice with 32 tokens every second, or about 3% of the total capacity. This gives a fair allocation. Do you think that by charging tasks different consumption rates, it would be possible to build a priority scheme on top of the token method?
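To make the question concrete, here is a minimal sketch of the token-bucket accounting the paper describes, extended with a hypothetical per-slice consumption rate to mimic priorities. The class and method names (Slice, refill, charge) are illustrative only, not PlanetLab's actual interface.

```python
import time

class Slice:
    def __init__(self, name, fill_rate, consumption_rate=1.0, burst=1000):
        self.name = name
        self.fill_rate = fill_rate                 # tokens earned per second (100 for a 10% reservation)
        self.consumption_rate = consumption_rate   # tokens charged per ms of CPU (1.0 = the paper's scheme)
        self.burst = burst                         # cap on accumulated tokens
        self.tokens = 0.0
        self.last = time.monotonic()

    def refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now

    def charge(self, cpu_ms):
        """Charge cpu_ms of CPU time; return True if the slice had enough tokens."""
        self.refill()
        cost = cpu_ms * self.consumption_rate
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False   # out of tokens: the slice must wait for the bucket to refill

# A 10% reservation earns 100 tokens/s; the default share earns 32 tokens/s (~3%).
reserved = Slice("reserved", fill_rate=100)
default  = Slice("default",  fill_rate=32)
# Hypothetical low-priority slice: paying 2 tokens per ms halves its effective CPU share.
background = Slice("background", fill_rate=32, consumption_rate=2.0)
```

Varying consumption_rate per slice (or per task) would effectively scale each consumer's share without changing the fill rates, which is one way a priority scheme could be layered onto the same token mechanism.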
"This implies the need for a control mechanism, but globally synchronizing such a mechanism across PlanetLab (i.e., to suspend a slice) is problematic at fine-grained time scales." - Section 4.4.2 in the paper.
Globally synchronizing such a mechanism would certainly be harder to implement, but why would it be problematic to suspend a single VM and swap it out temporarily? If the VMs are designed to pass certain events (or the hook itself) on to a predefined control mechanism, managing inactive VMs should be simpler and, more importantly, should improve resource utilization (a rough sketch follows after this comment).
Also, do you think the current memory allocation mechanism could result in severe external fragmentation?
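A rough sketch of the hook idea raised above: each VMM forwards activity events to a local controller that suspends idle VMs. This is purely hypothetical, the paper does not describe such an interface, and it ignores the real difficulty of coordinating suspension consistently across thousands of nodes.

```python
class VMController:
    def __init__(self, vmm, idle_threshold_s=300):
        self.vmm = vmm                      # assumed interface: suspend(vm_id), resume(vm_id)
        self.idle_threshold_s = idle_threshold_s
        self.last_activity = {}             # vm id -> timestamp of last observed event

    def on_event(self, vm_id, timestamp):
        """Hook invoked by the VM on any scheduling or network activity."""
        self.last_activity[vm_id] = timestamp

    def sweep(self, now):
        """Periodically suspend VMs that have been idle past the threshold."""
        for vm_id, last in self.last_activity.items():
            if now - last > self.idle_threshold_s:
                self.vmm.suspend(vm_id)     # swap the idle VM out to reclaim memory
```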
The authors say that the direct method of slice instantiation is pull-based. They also say that an advantage of the pull-based approach is that slices persist after a node reinstall. Can you explain how this happens?
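One way to read this: in a pull-based model the node manager periodically fetches the authoritative set of slices and recreates anything missing locally, so after a reinstall the node simply pulls the list again and the slices reappear. A minimal sketch, assuming hypothetical helpers (fetch_slice_specs, local_slices, create_slice) rather than PlanetLab's actual API:

```python
import time

POLL_INTERVAL_S = 60

def reconcile_loop(fetch_slice_specs, local_slices, create_slice):
    while True:
        desired = fetch_slice_specs()       # pull the current slice set from the central database
        present = local_slices()            # slices already instantiated on this node
        for spec in desired:
            if spec.name not in present:
                create_slice(spec)          # (re)create any slice that is missing locally
        time.sleep(POLL_INTERVAL_S)
```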
There is a statement in the paper which says, "Each PlanetLab node should gracefully degrade in performance as the number of users grows." How can degradation in performance be beneficial? Can you elaborate on this?
How does PlanetLab trace disruptive traffic if many VMs share an IP address?
While running an experiment on PlanetLab, if there is a node failure or a network outage, how is fault tolerance achieved?
ReplyDelete"This implies the need for a control mechanism, but globally synchronizing such a mechanism across PlanetLab (i.e., to suspend a slice) is problematic at fine-grained time scales." - Section 4.4.2 in the paper.
ReplyDeleteThe above would generally be more difficult to implement, but why would it be problematic to suspend a VM and swap it out temporarily? If the VMs are designed to pass on certain events (or the hook itself) to a predefined control mechanism, management of inactive VMs should be simpler and, more importantly, improve resource utilization.
Also, do you think the the current memory allocation mechanism could result in severe external fragmentation?
The authors say that the direct method of slice instantiation is pull based. They also say that the advantage of pull based approach is that slices persist after node reinstall. Can you explain how does this happen?
ReplyDeleteThere is statement in the research paper which says "Each PlanetLab node should gracefully degrade in performance as the number of users grows."How can degradation in performance be beneficial,can you elaborate on this?
ReplyDelete