X-Git-Url: https://git.mxchange.org/?p=quix0rs-apt-p2p.git;a=blobdiff_plain;f=TODO;h=c4920231012c317d78831cf09c89482ef6db2c2a;hp=2477d2fc0b851ac8161055d33ebe8a4dc4384e84;hb=8c18aac4790e10c1126108ab4b71b19e148045db;hpb=d63ad7d7b1c9e5567bd28450197ef810dc5c5475

diff --git a/TODO b/TODO
index 2477d2f..c492023 100644
--- a/TODO
+++ b/TODO
@@ -1,15 +1,9 @@
-Add statistics gathering to the peer downloading.
+Consider what happens when multiple requests for a file are received.
 
-Statistics are needed of how much has been uploaded, downloaded from
-peers, and downloaded from mirrors.
-
-
-Add all cache files to the database.
-
-All files in the cache should be added to the database, so that they can
-be checked to make sure nothing has happened to them. The database would
-then need a flag to indicate files that are hashed and available, but
-that shouldn't be added to the DHT.
+When another request comes in for a file already being downloaded,
+the new request should wait for the old one to finish. This should
+also be done for multiple requests for peer downloads of files with
+the same hash.
 
 
 Packages.diff files need to be considered.
@@ -21,45 +15,27 @@ distributions. They need to be dealt with properly by adding them to the
 tracking done by the AptPackages module.
 
 
-Retransmit DHT requests before timeout occurs.
-
-Currently, only a single transmission to a peer is ever attempted. If
-that request is lost, a timeout will occur after 20 seconds, the peer
-will be declared unreachable and the action will move on to the next
-peer. Instead, try to resend the request periodically using exponential
-backoff to make sure that lost packets don't delay the action so much.
-For example, send the request, wait 2 seconds and send again, wait 4
-seconds and send again, wait 8 seconds (14 seconds have now passed) and
-then declare the host unreachable. The same TID should be used in each
-retransmission, so receiving multiple responses should not be a problem
-as the extra ones will be ignored.
-
-
-PeerManager needs to download large files from multiple peers.
-
-The PeerManager currently chooses a peer at random from the list of
-possible peers, and downloads the entire file from there. This needs to
-change if both a) the file is large (more than 512 KB), and b) there are
-multiple peers with the file. The PeerManager should then break up the
-large file into multiple pieces of size < 512 KB, and then send requests
-to multiple peers for these pieces.
-
-This can cause a problem with hash checking the returned data, as hashes
-for the pieces are not known. Any file that fails a hash check should be
-downloaded again, with each piece being downloaded from different peers
-than it was previously. The peers are shifted by 1, so that if a peer
-previously downloaded piece i, it now downloads piece i+1, and the first
-piece is downloaded by the previous downloader of the last piece, or
-preferably a previously unused peer. As each piece is downloaded the
-running hash of the file should be checked to determine the place at
-which the file differs from the previous download.
-
-If the hash check then passes, then the peer who originally provided the
-bad piece can be assessed blame for the error. Otherwise, the peer who
-originally provided the piece is probably at fault, since he is now
-providing a later piece. This doesn't work if the differing piece is the
-first piece, in which case it is downloaded from a 3rd peer, with
-consensus revealing the misbehaving peer.
+Improve the downloaded and uploaded data measurements.
+
+There are 2 places where this data is measured: for statistics, and for
+limiting the upload bandwidth. They both have deficiencies as they
+sometimes miss the headers or the requests sent out. The upload
+bandwidth calculation only considers the stream in the upload and not
+the headers sent, and it also doesn't consider the upload bandwidth
+from requesting downloads from peers (though that may be a good thing).
+The statistics calculations for downloads include the headers of
+downloaded files, but not the requests received from peers for upload
+files. The statistics for uploaded data only include the files sent
+and not the headers, and also miss the requests for downloads sent to
+other peers.
+
+
+Rehash changed files instead of removing them.
+
+When the modification time of a file changes but the size does not,
+the file could be rehashed to verify it is the same instead of
+automatically removing it. The DB would have to be modified to return
+Deferreds for a lot of its functions.
 
 
 Consider storing deltas of packages.
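
The first new TODO item above (a second request for a file, or for a peer
download with the same hash, should wait for the download already in
progress) is the usual request-coalescing pattern. Below is a minimal
sketch of how it could look with Twisted Deferreds; the DownloadCoalescer
class, its start_download callable, and the file_hash argument are all
hypothetical names for illustration, not apt-p2p's actual API.

    from twisted.internet import defer

    class DownloadCoalescer(object):
        """Hypothetical helper: collapse concurrent requests for one hash
        into a single download (a sketch, not the project's real code)."""

        def __init__(self, start_download):
            # start_download(file_hash) must return a Deferred that fires
            # with the path of the downloaded file.
            self.start_download = start_download
            self.waiting = {}  # file_hash -> Deferreds queued behind a running download

        def request(self, file_hash):
            if file_hash in self.waiting:
                # A download for this hash is already running: the new
                # request just waits for the old one to finish.
                d = defer.Deferred()
                self.waiting[file_hash].append(d)
                return d
            self.waiting[file_hash] = []
            d = self.start_download(file_hash)
            d.addCallbacks(self._finished, self._failed,
                           callbackArgs=(file_hash,), errbackArgs=(file_hash,))
            return d

        def _finished(self, path, file_hash):
            # Wake every request that piled up while the download was running.
            for waiter in self.waiting.pop(file_hash, []):
                waiter.callback(path)
            return path

        def _failed(self, reason, file_hash):
            for waiter in self.waiting.pop(file_hash, []):
                waiter.errback(reason)
            return reason

The same structure covers both cases mentioned in the item: key on the URL
for mirror downloads and on the expected hash for peer downloads.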
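
The removed "Retransmit DHT requests before timeout occurs" item gives a
concrete schedule: send at 0, 2 and 6 seconds, declare the peer unreachable
at 14 seconds, and reuse the same TID so duplicate replies can simply be
dropped. A rough sketch of that schedule with Twisted timers follows;
transport.write(data, addr) is Twisted's standard UDP call, while the
wrapper itself and the way the datagram handler routes a reply to
on_response are assumptions, not the project's real KRPC code.

    from twisted.internet import defer, reactor

    def send_with_retransmit(transport, addr, packet, delays=(2, 4, 8)):
        """Send the same datagram (same TID) at t=0, t=2 and t=6 seconds,
        and declare the peer unreachable at t=14 if nothing came back."""
        result = defer.Deferred()
        timers = []

        def on_response(response):
            # Called by the datagram handler for a reply carrying this TID;
            # later duplicates find result.called set and are ignored.
            for timer in timers:
                if timer.active():
                    timer.cancel()
            if not result.called:
                result.callback(response)

        def on_timeout():
            if not result.called:
                result.errback(RuntimeError("peer unreachable after 14 seconds"))

        transport.write(packet, addr)              # initial transmission, t=0
        elapsed = 0
        for delay in delays[:-1]:                  # retransmissions at t=2, t=6
            elapsed += delay
            timers.append(reactor.callLater(elapsed, transport.write, packet, addr))
        timers.append(reactor.callLater(elapsed + delays[-1], on_timeout))
        return result, on_response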
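
The removed PeerManager item spells out its piece-assignment trick exactly:
after a failed whole-file hash check, every peer is shifted forward by one
piece, the first piece falls to the previous downloader of the last piece
(or an unused peer), and the point where the running hash diverges shows
who to blame. A small sketch of just that assignment logic follows; the
function names and the constant name are made up for illustration.

    PIECE_SIZE = 512 * 1024          # pieces must stay under 512 KB

    def piece_count(file_size, piece_size=PIECE_SIZE):
        # Number of pieces a large file is broken into.
        return (file_size + piece_size - 1) // piece_size

    def assign_peers(peers, num_pieces, attempt=0):
        """Map piece index -> peer for a given download attempt.

        Attempt 0 spreads the pieces round-robin over the peers.  On each
        retry the peer that previously fetched piece i is handed piece i+1,
        so a corrupt piece that follows a peer across attempts exposes the
        culprit."""
        return [peers[(i - attempt) % len(peers)] for i in range(num_pieces)]

For example, with peers [A, B, C] and three pieces, attempt 0 assigns
(A, B, C) and attempt 1 assigns (C, A, B): the peer that served piece 0 now
serves piece 1, and piece 0 goes to the previous downloader of the last
piece, exactly as the item describes.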
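
For the new "Rehash changed files instead of removing them" item, the check
itself is simple: if the size still matches but the modification time moved,
rehash and compare before dropping the file from the database. A sketch
under assumed names (the stored hash is taken to be a SHA-1 hex digest here;
the real DB schema may differ) is:

    import hashlib
    import os

    def still_valid(path, known_size, known_mtime, known_sha1):
        """Return True if the file can be kept despite a changed mtime."""
        st = os.stat(path)
        if st.st_size != known_size:
            return False                  # size changed: genuinely a new file
        if st.st_mtime == known_mtime:
            return True                   # nothing changed at all
        sha1 = hashlib.sha1()             # only the timestamp moved: rehash
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(64 * 1024), b''):
                sha1.update(chunk)
        return sha1.hexdigest() == known_sha1

In practice the rehash would run asynchronously rather than block the
reactor, which is why the item notes that many of the DB's functions would
have to start returning Deferreds.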