Is it time to verify Results

Message boards : Number crunching : Is it time to verify Results


Profile UBT - Halifax--lad
Joined: 17 Sep 05
Posts: 157
Credit: 2,687
RAC: 0
Message 2606 - Posted: 8 Nov 2005, 6:33:17 UTC

Maybe it would be a better idea to have the results verified like the rest of the BOINC projects, so that each WU is crunched 3 or so times. I'm only mentioning this because I was looking at some of my past results which had errors on them; no one else has had these WUs, so there is no science result at all for any of the WUs I had that failed, unless they are going to be sent out again at some point.

Would it put too much strain on the server if you converted to this process? What are the pros and cons of processing a WU 3 times to verify the results for credit?
Join us in Chat (see the forum) Click the Sig


Join UBT
ID: 2606 · Rating: -1
Ethan
Volunteer moderator

Joined: 22 Aug 05
Posts: 286
Credit: 9,304,700
RAC: 0
Message 2607 - Posted: 8 Nov 2005, 6:41:21 UTC - in response to Message 2606.  

I'm not part of the project, but I'd make the first claim... it would cause their resources to be cut by two thirds.

They are able to tell if results are 'real' or falsified without sending the same work unit out several times.

This is an obvious plus for the project since they aren't halving (or more) the computing resources available... every user's results are significant.


ID: 2607 · Rating: 0
David Baker
Volunteer moderator
Project administrator
Project developer
Project scientist

Joined: 17 Sep 05
Posts: 705
Credit: 559,847
RAC: 0
Message 2609 - Posted: 8 Nov 2005, 7:48:03 UTC

Ethan is right--as long as the landscapes we are searching are large compared to the available computer power, we are reluctant to carry out redundant runs--we would cut our searching by two or three fold. Lost work units are not so much of a problem since the location each calculation starts searching at is random. Lottery tickets are a good analogy--if you had a ticket, and lost it, you would be equally likely to win with a new ticket. The probability of success depends only on the number of completed work units (lottery tickets not lost), and is not negatively affected by lost work units--so completing a new work unit makes up for having lost an earlier one.
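A minimal numeric sketch of the lottery argument (the success probability and work-unit counts below are purely illustrative assumptions, and every work unit is treated as an independent random restart): the chance of at least one hit depends only on how many searches actually complete, so a lost result that is replaced by a fresh one leaves the odds unchanged.

    # Illustrative only: p and the counts are made-up numbers.
    def p_at_least_one_hit(completed_wus, p=1e-5):
        """Chance that at least one independent random-restart search succeeds."""
        return 1.0 - (1.0 - p) ** completed_wus

    print(p_at_least_one_hit(100_000))              # ~0.63
    # Lose 500 results, complete 500 fresh ones: exactly the same odds.
    print(p_at_least_one_hit(100_000 - 500 + 500))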

Also, we can tell if results have been tampered with (I'm sure nobody would want to do this!) because we recompute the energies on our local computers of the most promising structures found in all of the searches.
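For illustration, a rough sketch of that audit step, assuming the project keeps a trusted scoring routine on its own machines (the function names and tolerance are hypothetical placeholders, not the actual Rosetta code): re-score the lowest-energy structures that come back and flag any whose reported energy disagrees.

    def audit_top_results(results, rescore, n_best=100, rel_tolerance=0.01):
        """results: list of (structure, reported_energy) pairs from volunteers.
        rescore: trusted energy function run on the project's own machines.
        Returns entries whose recomputed energy disagrees with the report."""
        suspicious = []
        for structure, reported in sorted(results, key=lambda r: r[1])[:n_best]:
            recomputed = rescore(structure)
            if abs(recomputed - reported) > rel_tolerance * max(1.0, abs(recomputed)):
                suspicious.append((structure, reported, recomputed))
        return suspicious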
ID: 2609 · Rating: 0
Janus

Joined: 6 Oct 05
Posts: 7
Credit: 1,209
RAC: 0
Message 2617 - Posted: 8 Nov 2005, 10:41:46 UTC
Last modified: 8 Nov 2005, 10:51:25 UTC

When you look at the results, is it possible for you to see if they are unlikely to be correct by using "adjacent" results?
For instance, if I download the source code and optimize it a bit too much so that it always generates energy ratings that are too high compared to the correct result (I assume that in the example you just mentioned the energy rating was too low), would you be able to tell? Perhaps from some kind of spike in the "landscape" you are searching?

What about redundancy set at 2 results? This seems like a good strategy to me if the answer to the above is "no".

And looking at the growth rate of this project, it's probably not that bad to cut it in two. OK, you do have the coolest server equipment, so perhaps that's not an issue =)
ID: 2617 · Rating: 0
Profile Paul D. Buck

Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 2621 - Posted: 8 Nov 2005, 12:49:19 UTC

David can correct me if I am wrong ... but ... if *I* understand this "phase" of the project, they are on two kinds of hunt here. We tend to think of a search for an answer as the end product of the work processed.

BUT ...

We are not only searching for specific answers, we are searching for the best way of searching ...

SO ...

As he stated, the loss of a specific result, or its contamination, etc., is not as important as obtaining the largest possible coverage.

As a contrast, LHC@Home HAS to process *ALL* results to very precise tolerances or the work is not useful.

Here, without redundancy, we can cover a much wider space and try more techniques in the searches to see what seems to work. As promised in another post, we should be seeing this week or next (I gave you a schedule slip if you need it ...) some insight into what they have learned in the last few weeks ... (pant, pant, pant ...)
ID: 2621 · Rating: 0
Janus

Joined: 6 Oct 05
Posts: 7
Credit: 1,209
RAC: 0
Message 2622 - Posted: 8 Nov 2005, 12:56:34 UTC

Yup, I'm just worried that I may end up harming the project by trying to contribute more...
ID: 2622 · Rating: 0
Scott Brown

Joined: 19 Sep 05
Posts: 19
Credit: 8,739
RAC: 0
Message 2626 - Posted: 8 Nov 2005, 13:41:54 UTC

But cheating can still be a significant problem. If credits are my only (or main) motivation, then why wouldn't I push the limits in this project? I could (as Janus suggested) artificially generate high energy scores that would never be checked (based on David's post above that only low-energy units are reexamined). Add in some optimized routines and, voila, I can generate credit as fast as I desire. The problem here is one that those of us from the old SETI@Home Classic days remember well.

I would also suggest that David's "lottery ticket" analogy isn't quite correct. As is plainly clear in his post, maximizing resources is at a premium here. In the analogy, lost lottery tickets carry no opportunity cost; but since lottery tickets have a monetary expense, the loss is not irrelevant. For the project, David's logic applies only to the individual work unit: if it is lost, another randomly targeted unit will indeed have an equal likelihood of obtaining a useful result. The problem with the analogy occurs at the project level. Given finite computing resources, lost work units cost the project (both indirectly, in the sense of donated computing time that is essentially unused, and directly, in how those lost work units load the project's infrastructure). Thus, at some threshold fraction of lost or bogus results, a redundancy factor of 2 (as Janus suggested) actually does complete more useful work than the non-redundant runs. If the lost donated time were the only issue, then the threshold could simply be computed as 50% of the users or hosts. However, the difficulty lies in calculating that threshold given the inherent complexity of computing the infrastructure load. This additional loss necessarily means that the actual threshold lies below 50% of users/hosts (and could perhaps be a very small percentage). Thus, I would argue, a minimum level of redundancy is the best route.
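To make that break-even point concrete, here is a back-of-the-envelope sketch (a toy model with illustrative numbers, assuming redundancy 2 simply halves throughput while catching all bad results): single issue stays ahead only while the wasted fraction, plus some infrastructure overhead, stays under roughly 50%.

    # Toy model, not project data; the overhead figure is an assumption.
    def useful_share_single_issue(bad_fraction, infra_overhead=0.05):
        """Share of donated compute yielding trustworthy results when each
        WU is issued once; infra_overhead models the extra server-side cost
        of handling bad or lost results."""
        return (1.0 - bad_fraction) * (1.0 - infra_overhead)

    USEFUL_SHARE_REDUNDANCY_2 = 0.5  # every WU crunched twice

    for bad in (0.01, 0.10, 0.40, 0.55):
        print(f"bad results {bad:.0%}: single issue "
              f"{useful_share_single_issue(bad):.2f} vs "
              f"redundancy-2 {USEFUL_SHARE_REDUNDANCY_2:.2f}")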
ID: 2626 · Rating: 0
Profile Tern
Joined: 25 Oct 05
Posts: 576
Credit: 4,695,251
RAC: 18
Message 2629 - Posted: 8 Nov 2005, 14:51:16 UTC - in response to Message 2626.  

But cheating can still be a significant problem. If credits are my only (or main) motivation, then why wouldn't I push the limits in this project? I could (as Janus suggested) artificially generate high energy scores that would never be checked (based on David's post above that only low-energy units are reexamined). Add in some optimized routines and, voila, I can generate credit as fast as I desire.


I guess the question has to be where this would impact the search, as you say. If a few people, or even a few hundred, turn in "worthless" results just for the credits, that effectively is the same as not checking those WUs at all. If cutting the number of results checked by half, or by two-thirds, due to redundancy, would mean MORE results were never checked (because they never get issued due to too little CPU power), then the impact of "cheating" on the project is minimal. Until/unless it actually becomes an issue, I see no reason to waste that huge a share of the computer power, just to avoid something that might, someday, maybe, be an issue. Just "gut feel" says that it probably never will be a problem.

I see the lack of redundancy as a problem _only_ from the credit standpoint. Because there is no dropping of the highest credit, no averaging, I can effectively ask for any amount of credit I want. Because of the variation in CPUs and in the workunits themselves, the project can't just say "anything over x credits needs to be looked at". (I have a "legitimate" 122-credit WU out there... 31 hours of crunching...) Because there isn't a quorum to compare against, other participants won't notice someone claiming 50 credits/result until they are in the top rankings and someone gets curious and looks.

BOINC V5.x has the SETI-beta "flop counting" code in it. Using that would both eliminate cheating-via-benchmarks and be a good example of the "improved" method for other projects to follow. If the Rosetta WUs aren't conducive to using that, and Paul's calibrated-host proposal won't work without redundancy, then I don't know what to suggest, unless it's perhaps issuing SOME results redundantly, at random, and looking at the spread of credits claimed, hoping to catch any problems.
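For illustration, roughly how flop counting changes the claim, assuming the conventional cobblestone definition of about 200 credits per day of sustained 1 GFLOPS (the function names are made up, not the real BOINC API): the claim comes from operations counted inside the science code, so inflated benchmark numbers stop paying off.

    COBBLESTONES_PER_GFLOPS_DAY = 200.0   # assumed conventional value
    SECONDS_PER_DAY = 86400.0

    def credit_from_counted_flops(fpops):
        """Credit claim based on floating-point operations counted by the app."""
        return fpops / (1e9 * SECONDS_PER_DAY) * COBBLESTONES_PER_GFLOPS_DAY

    def credit_from_benchmarks(whetstone_mflops, cpu_seconds):
        """The older benchmark-times-CPU-time claim, inflatable via fake benchmarks."""
        return (whetstone_mflops / 1e3) * cpu_seconds / SECONDS_PER_DAY \
               * COBBLESTONES_PER_GFLOPS_DAY

    print(credit_from_counted_flops(2e13))   # ~46 credits, whatever the host claims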

ID: 2629 · Rating: 0
Profile FZB

Joined: 17 Sep 05
Posts: 84
Credit: 4,948,999
RAC: 0
Message 2640 - Posted: 8 Nov 2005, 16:56:06 UTC

As most people asking for redundant computation are concerned with cheating, one could perhaps embed a random seed in the WUs, and the client would have to calculate a checksum which is then validated in the returned result.
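A minimal sketch of what that could look like, assuming the official client and the server share a secret that the open BOINC core does not expose (the secret, payload fields, and function names here are all hypothetical):

    import hashlib, hmac, json

    SHARED_SECRET = b"known-only-to-official-client-and-server"  # assumption

    def sign_result(wu_seed, result_payload):
        """Client side: bind the returned result to the work unit's random seed."""
        blob = json.dumps({"seed": wu_seed, "result": result_payload},
                          sort_keys=True).encode()
        return hmac.new(SHARED_SECRET, blob, hashlib.sha256).hexdigest()

    def validate_result(wu_seed, result_payload, checksum):
        """Server side: recompute the checksum and compare."""
        return hmac.compare_digest(sign_result(wu_seed, result_payload), checksum)

    payload = {"lowest_energy": -123.4, "decoys": 10}
    tag = sign_result(987654321, payload)
    assert validate_result(987654321, payload, tag)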
--
Florian
www.domplatz1.de
ID: 2640 · Rating: 0
Profile Tern
Joined: 25 Oct 05
Posts: 576
Credit: 4,695,251
RAC: 18
Message 2642 - Posted: 8 Nov 2005, 17:12:30 UTC - in response to Message 2640.  

random seed in the WUs


With open source, cheating-by-changing-code would just leave that part alone. And that wouldn't affect the credit issue. With closed source, something like that would work, at least to prevent someone from writing their own 'dummy' application. In fact, I believe something like that is present for most projects already.

ID: 2642 · Rating: 0
ColdRain~old
Joined: 1 Nov 05
Posts: 27
Credit: 33,378
RAC: 0
Message 2648 - Posted: 8 Nov 2005, 18:48:07 UTC
Last modified: 8 Nov 2005, 18:50:18 UTC

BOINC is open source; the Rosetta client is not. The Rosetta client has some anti-tampering checks built in, if I'm not mistaken.
So if cheating is possible at all, it would be cheating with the credits earned, not with the scientific value of the returned results.
I admit, the credits earned and the team competition are a major motivation for many DC adepts, including "yours sincerely" :) But with BOINC (the system that is responsible for the credits) I haven't seen many cheating threads over the past years. Yes, it is possible to tweak (read: compile) the BOINC client to the max, but that doesn't hurt the scientific project or any other competitors; they can easily do the same. I take pride in the slanderous nickname "credit whore" :):) Meanwhile, I know I'm helping a worthwhile scientific, and moreover medically oriented, project, and after all, THAT's what's driving me!! If it weren't for that goal, I'd be watching TV or spending my time and resources elsewhere :)
ID: 2648 · Rating: 0
Profile UBT - Halifax--lad
Joined: 17 Sep 05
Posts: 157
Credit: 2,687
RAC: 0
Message 2652 - Posted: 8 Nov 2005, 19:38:45 UTC

Rosetta is going open source in the next week or so, I think I remember Admin saying.
Join us in Chat (see the forum) Click the Sig


Join UBT
ID: 2652 · Rating: 0
ColdRain~old
Joined: 1 Nov 05
Posts: 27
Credit: 33,378
RAC: 0
Message 2660 - Posted: 8 Nov 2005, 20:17:54 UTC - in response to Message 2652.  

Rosetta is going open source in the next week or so, I think I remember Admin saying.

Must have missed that post. I wonder why they would go open source. It's their source, and the methods used are already available through Robetta and other means.
ID: 2660 · Rating: 0
Profile UBT - Halifax--lad
Joined: 17 Sep 05
Posts: 157
Credit: 2,687
RAC: 0
Message 2664 - Posted: 8 Nov 2005, 20:40:39 UTC - in response to Message 2660.  
Last modified: 8 Nov 2005, 20:49:02 UTC

Rosetta is going open source in the next week or so, I think I remember Admin saying.

Must have missed that post. I wonder why they would go open source. It's their source, and the methods used are already available through Robetta and other means.


If I can find the post I think I have seen, I will post it here for you, unless I made it up in my head; give me half an hour.

I remember where I saw it: it was over on SETI, see the 15th message down, from David Baker:

http://setiathome.berkeley.edu/forum_thread.php?id=21969#186902

plus found a slight mention on Rosetta Forum:

https://boinc.bakerlab.org/rosetta/forum_thread.php?id=301

I knew I wasn't going mad and making it up.
Join us in Chat (see the forum) Click the Sig


Join UBT
ID: 2664 · Rating: 0
EclipseHA

Joined: 3 Nov 05
Posts: 12
Credit: 284,797
RAC: 0
Message 2675 - Posted: 9 Nov 2005, 1:48:47 UTC
Last modified: 9 Nov 2005, 2:21:57 UTC

There is only one other BOINC project I know of that has "open source" for the crunching code, and that's SETI.

I've followed the "why can't I get your source code" threads on all the various projects, and without exception, I agree with the reasons for NOT releasing the source.

With Rosetta doing no redundancy checks, logic says that they should keep their source closed, IMHO, for a few reasons:

1) Same as why the projects other than SETI don't open their source - check out the "open source" threads on other projects and one or more of the reasons will probably apply here.

2) The project people here seem to say "we don't worry about a few bogus results", but there's another side of the coin that they seem to be ignoring. And this could really happen without any crosschecking of results.

It's the "credit" thing. Face it, credits are important. Even if it's just to keep an eye on your systems and make sure they increase (if not, there's a problem), or because you want to be better in the stats, for yourself, your team, etc.! (The BOINC funding request to the NSF specifically calls out how important the competition is for the success of DC, and one of the reasons UCB gives for moving to BOINC is to reduce or eliminate cheating.)

With open source and no validity checking, there's a chance of "inflated credits" being claimed with bogus results, and that impacts the user feedback of credits within BOINC. Kind of like saying "here's my credit card. You can use it, but don't look at the credit card number, and only charge what I tell you that you can." Anyone with a kid would know that will only work for a short time.

And without some validity checks, it's like giving your kid the above-mentioned credit card and then never looking at the bill.

ID: 2675 · Rating: 0
Profile Paul D. Buck

Joined: 17 Sep 05
Posts: 815
Credit: 1,812,737
RAC: 0
Message 2702 - Posted: 9 Nov 2005, 11:40:27 UTC

Going back to the "lost" work ... if the work unit is not returned it CAN be reissued ... if it comes back client error ... it can be reissued. So, lost or stolen :) work can be reissued without a problem (assuming that the project does in fact do this).

With that being said, I agree with azwoody (a mind-shattering event in and of itself). Credit is important, very important. There have been any number of papers on it, including one by Carl at CPDN and Dr. Anderson (let's see, which project?), and yet, NOW, I feel that it is being neglected ...

The FLOPS count is a step forward, but DOES NOT address the cheating aspects by itself. That was the reason that in my slightly more complex proposal I used "double blind" techniques to prevent forgery/fraud.

Unfortunately, I have not been well enough to pursue this yet. But, I think that there is still a problem, the only good news is that some of the things that I would need to have in place are being put into place (though I have to wait for the new science application to be fielded).

I guess from my perspective, I think I have a "good" system that meets almost all objections (I have not seen one that seriously challenges any of the concepts ...), and is mostly an extension of the upcoming FLOPS counting method. The one part that I do not yet know is how much of a "load" the FLOPS count adds to the computational burden (the only way to tell will be to run with the counts in, then comment them out and run the program again without the counts).
ID: 2702 · Rating: 0
Mike Gelvin
Joined: 7 Oct 05
Posts: 65
Credit: 10,612,039
RAC: 0
Message 2762 - Posted: 9 Nov 2005, 22:14:44 UTC - in response to Message 2660.  

I wonder why they would go open source. It's their source, and the methods used are already available through Robetta and other means.


I believe that there are bugs in the Rosetta code that they don't have the resources to find. They will be asking for help, and hence, the open source.

ID: 2762 · Rating: 0
EclipseHA

Joined: 3 Nov 05
Posts: 12
Credit: 284,797
RAC: 0
Message 2768 - Posted: 10 Nov 2005, 2:29:59 UTC - in response to Message 2762.  

I wonder why they would go open source. It's their source, and the methods used are already available through Robetta and other means.


I believe that there are bugs in the Rosetta code that they don't have the resources to find. They will be asking for help, and hence, the open source.



Sorry, but this is not a valid reason. It's like giving your credit card to 100 people and telling them to see if the card readers work around town, and to trust them not to charge anything without telling you.

Open source is not the way to get help like this.

What's needed is a request for volunteers who will help track down the bugs and be given the source, but will NOT release the source or modified crunchers to others. That way, a fix can be found and merged into the "real" cruncher, without an unknown number of versions, with unknown changes, floating around.

Based on the platform/compiler/language requirements, I'd be glad to spend some time with the code fixing bugs under this "NDA"-type release of code. I've been a DOS/Windows and Unix/Linux programmer for over 25 years, primarily in C, and the project people can contact me if they'd like; by way of my profile, they have my email address.
ID: 2768 · Rating: 0
Profile nasher

Joined: 5 Nov 05
Posts: 98
Credit: 618,288
RAC: 0
Message 2954 - Posted: 12 Nov 2005, 8:12:00 UTC

Unfortunately, if there is a way to cheat, people will.


As for the redundancy and such, and not having the computing power to do this: just wait, a lot of people from FaD are on their way. I agree that I hate seeing redundancy in the high range, but I always thought a small redundancy (less than 2) was nice to see. For instance, instead of sending ALL work units out 2 or 3 or 5 etc. times, just send, say, 5% of those calculated last week back out at random as a check. It's a really low redundancy and doesn't use 50% of the people. Then if any of the redundant work units come back different, they can be checked to see if there is a corrupt copy out there or if a person is cheating in some way.
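A small sketch of that spot-check idea (the function names, the 5% rate, and the tolerance are illustrative assumptions): pick a random slice of last week's finished work units, send them out again, and flag any whose re-crunched result disagrees with what was first reported.

    import random

    def pick_spot_checks(completed_wu_ids, fraction=0.05, seed=None):
        """Choose which already-finished WUs to re-issue as a check."""
        rng = random.Random(seed)
        k = max(1, int(len(completed_wu_ids) * fraction))
        return rng.sample(list(completed_wu_ids), k)

    def flag_mismatches(original_energies, recheck_energies, tolerance=1e-3):
        """Both arguments map wu_id -> reported lowest energy."""
        return [wu for wu in recheck_energies
                if abs(original_energies[wu] - recheck_energies[wu]) > tolerance]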

I currently have 1 machine spending some of its time crunching here, and I will probably start sending others over here this weekend. Yeah, I know a lot of FaD people are staying there till the last moment (Dec 16th), but I figure it's time to start looking seriously to determine if this is the right project for me (i.e. can my computer still play the games I like to play without slowdown while crunching).


Nasher
ID: 2954 · Rating: 0
Profile UBT - Halifax--lad
Joined: 17 Sep 05
Posts: 157
Credit: 2,687
RAC: 0
Message 2962 - Posted: 12 Nov 2005, 11:54:50 UTC - in response to Message 2954.  

but I figure it's time to start looking seriously to determine if this is the right project for me (i.e. can my computer still play the games I like to play without slowdown while crunching)
Nasher


You shouldn't see much slowdown of your computer. BOINC is designed to run in the background and not affect any other running programs. I know there are a lot of gamers using BOINC, and they have said they don't notice any difference in their game play; the WUs may just take slightly longer to process than normal, that is all.
Join us in Chat (see the forum) Click the Sig


Join UBT
ID: 2962 · Rating: 0



