Open-source GIB: An idea I've been toying with
#21
Posted 2012-March-21, 07:09
#22
Posted 2012-March-21, 15:48
#23
Posted 2012-March-25, 02:38
cloa513, on 2012-March-21, 07:01, said:
Do you really think it's feasible to define meanings for every possible sequence? There are millions of combinations, particularly when competitive auctions are involved.
Humans don't have this problem because we learn basic principles and then use logic and creativity to apply them in different situations. But that's not how GIB works, it needs precise rules for everything. And the language and structure in which the GIB bidding rules are written makes it very difficult to tell whether all the cases are covered.
Even the simulations depend on rules. When it's deciding which bids to include in the simulation, it looks for rules that match hands similar to the one it has. And it has to know what the simulated bids show so it can determine how partner will respond to each of them.
Despite the fact that the "I" in GIB stands for "Intelligent", it isn't really. Like many forms of AI, it just fakes it. It's like Watson, the computer that played Jeopardy: it's really just a very fancy search engine, looking for entries in its database that match the most keywords in the clue.
Competent bridge bidding is an incredibly difficult task, making use of some of the features of the human mind that are unique to our species: language, complex planning, and empathy. Teaching a computer to do this is the holy grail of AI.
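Roughly speaking, the selection loop looks something like the following. This is only an illustrative Python sketch: the Rule fields, the hand helpers and the deal sampler are invented for the example, not GIB's actual database format or code.

```python
# Illustrative sketch only: Rule fields, hand helpers and sample_deal()
# are invented, not GIB's actual database format or code.
from dataclasses import dataclass, field

@dataclass
class Rule:
    bid: str                      # e.g. "2H"
    min_hcp: int
    max_hcp: int
    min_length: dict = field(default_factory=dict)   # e.g. {"H": 5}

    def matches(self, hand):
        return (self.min_hcp <= hand.hcp() <= self.max_hcp and
                all(hand.length(s) >= n for s, n in self.min_length.items()))

def candidate_bids(hand, rules):
    """Step 1: find every rule whose constraints the actual hand satisfies."""
    return [r for r in rules if r.matches(hand)]

def choose_bid(hand, rules, sample_deal, score, n=50):
    """Step 2: for each candidate bid, deal unseen hands consistent with the
    auction so far, predict how the auction would continue (which is where
    the rules are needed again, to interpret the simulated bids), and keep
    the bid with the best average double-dummy score."""
    best_bid, best_avg = None, float("-inf")
    for rule in candidate_bids(hand, rules):
        deals = [sample_deal(hand) for _ in range(n)]
        avg = sum(score(rule.bid, deal) for deal in deals) / n
        if avg > best_avg:
            best_bid, best_avg = rule.bid, avg
    return best_bid
```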
#24
Posted 2012-March-25, 03:39
barmar, on 2012-March-25, 02:38, said:
Humans don't have this problem because we learn basic principles and then use logic and creativity to apply them in different situations. But that's not how GIB works, it needs precise rules for everything. And the language and structure in which the GIB bidding rules are written makes it very difficult to tell whether all the cases are covered.
Even the simulations depend on rules. When it's deciding which bids to include in the simulation, it looks for rules that match hands similar to the one it has. And it has to know what the simulated bids show so it can determine how partner will respond to each of them.
Despite the fact that the "I" in GIB stands for "Intelligent", it isn't really. Like many forms of AI, it just fakes it. It's like Watson, the computer that played Jeopardy: it's really just a very fancy search engine, looking for entries in its database that match the most keywords in the clue.
Competent bridge bidding is an incredibly difficult task, making use of some of the features of the human mind that are unique to our species: language, complex planning, and empathy. Teaching a computer to do this is the holy grail of AI.
I don't say that at all. You should try reading what I write. I said to limit the number of sequences and then make sure all of those are well defined. The chain doesn't have to be complete: at some point, with enough information, GIB can simply simulate to find the best place for the contract, or partner signs off in the right spot.
A lot of GIB's problems arise because it is creative in a bad way: those simulations are used instead of bidding tables (no one expects amazing bidding from a computer program).
Good human players do simulations in their heads, so there is really no difference.
#25
Posted 2012-March-26, 23:31
#26
Posted 2012-March-27, 00:59
barmar, on 2012-March-26, 23:31, said:
People would like simulations never to be used in the low-level rounds of auctions: if there is a clear bidding rule, use that, not a simulation.
#27
Posted 2012-March-27, 16:16
cloa513, on 2012-March-27, 00:59, said:
That's essentially how it works. Each bidding rule includes a flag stating whether simulations are allowed, required, or prohibited. Most of the low-level rounds have clear bidding rules with simulations prohibited.
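In spirit it's something like this. This is just an illustrative Python sketch; the field names and the enum are mine, not the real rule format.

```python
# Minimal sketch of a bidding rule carrying a simulation flag, as described
# above. Field names are invented for illustration, not GIB's actual format.
from dataclasses import dataclass
from enum import Enum

class Sim(Enum):
    PROHIBITED = 0   # always trust the rule (typical of low-level rounds)
    ALLOWED = 1      # rule gives a default, a simulation may override it
    REQUIRED = 2     # rule only nominates candidates, simulate to choose

@dataclass
class BidRule:
    auction_prefix: tuple    # e.g. ("1N", "P")
    bid: str                 # e.g. "2C"
    meaning: str             # e.g. "Stayman, 8+ HCP, at least one 4-card major"
    sim: Sim = Sim.PROHIBITED

def needs_simulation(rule: BidRule) -> bool:
    return rule.sim is not Sim.PROHIBITED
```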
#28
Posted 2012-March-27, 16:59
barmar, on 2012-March-27, 16:16, said:
#29
Posted 2012-March-27, 22:34
xxhong, on 2012-March-27, 16:59, said:
Probably not. Whenever I mention it to Fred, he says that high level auctions, like slam decisions and whether to double 5-level bids, cannot generally be handled with rules. They require judgement, and GIB uses simulations in place of judgement.
In general, any auction that would cause a human expert to go into the tank cannot be programmed with rules.
#30
Posted 2012-March-28, 01:57
I suppose one could make a hand-evaluation formula which explicitly devalues honors opposite partner's singleton. But how to upgrade honors in the minor suit partner has opened? It is not guaranteed that partner has length/strength in the suit, but your own holding and/or the opps' bidding may make it likely.
And how to assess whether a single stopper in opps' suit is enough for a 3NT bid? I can't imagine making rules for that.
That said, GIB sometimes suffers from statistical flukes in short simulation series, and I have wondered if some Bayesian approach might work: make rules that provide a prior for the number of total points for each alternative action, and then use simulations to update the prior.
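Something like this toy sketch, where the prior acts as a handful of pseudo-observations. The numbers and the simulate() helper are made up, of course.

```python
# Toy Bayesian update: a rule-based prior on the expected tricks for each
# candidate bid, refined by a (possibly small) number of simulation samples.
def posterior_mean(prior_mean, prior_weight, samples):
    """Treat the prior as 'prior_weight' pseudo-observations at prior_mean,
    then average them together with the actual simulation samples."""
    n = len(samples)
    if n == 0:
        return prior_mean
    return (prior_weight * prior_mean + sum(samples)) / (prior_weight + n)

def choose_action(candidates, simulate, n_samples=20, prior_weight=10):
    """candidates: dict bid -> prior expected tricks (from the rules).
    simulate(bid) returns a double-dummy trick count for one random deal."""
    best_bid, best_est = None, float("-inf")
    for bid, prior_mean in candidates.items():
        samples = [simulate(bid) for _ in range(n_samples)]
        est = posterior_mean(prior_mean, prior_weight, samples)
        if est > best_est:
            best_bid, best_est = bid, est
    return best_bid

# e.g. choose_action({"3NT": 8.7, "5D": 10.2}, simulate)
```

With a small sample size the prior dominates, so a fluky run of deals cannot drag the estimate far from what the rules say; with a large sample the simulations take over.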
#31
Posted 2012-March-28, 09:27
The more the auction can branch out the less valuable simulations become. For example, it doesn't make any sense to me to simulate in order to decide between making a 1-level overcall and making a takeout double - it is not realistic to try to predict "what will happen next" with any accuracy even if the sample size is relatively large. Rules tend to be much more effective than simulations in such areas and of course they also produce more consistent actions from GIB (which I suspect is important to many users).
Handling freak hands via rules can be problematic - a significant number of reports of stupid bidding by GIB concern strange hands that are not covered properly by existing rules. Fortunately such hands are relatively rare and we tend to patch up the relevant rule(s) when a freak hand that slips through the cracks is reported. That being said, "rare" is relative. Hundreds of thousands of hands involving GIB are played every day on BBO.
Fred Gitelman
Bridge Base Inc.
www.bridgebase.com
#32
Posted 2012-March-28, 16:40
Also, I am not talking only about slam bidding. IMO, at least most three-level bids should have accurate meanings, and the bids shouldn't contradict each other. It's never a matter of simulation. There are still many bids that lack accurate definitions. For example, if the system says that after 1H 1S 2D 3C, 3D shows 5+ diamonds, it would be absurd to bid 3D with four diamonds just on the basis of a small-sample simulation. All the cuebids should also have accurate meanings that provide important constraints for slam decisions; those constraints are what make successful trick counting possible.
Simulation is bad in so many ways. It doesn't know that bridge is a single-dummy game: if you have AJx opposite KTx, it thinks you have no losers, whereas trick counting tells you that you have 0.5 losers. Also, how seriously to take the opponents' bidding into account is a difficult AI problem. If you take the wrong constraints from the opponents, you can never reach the correct bidding or play. Many times I see an opponent's misbid lead GIB into a bad finesse in a 100% contract and go down. All of this can be avoided by simple trick counting, or by ignoring the opponents' bidding entirely in 100% situations.
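To illustrate the AJx-opposite-KTx point, here is a quick Monte Carlo sketch. It is simplified: the queen is assumed to be equally likely on either side, and the single-dummy "guess" is just a coin flip.

```python
# The two-way queen guess with AJx opposite KTx, simplified: double dummy
# always picks the winning finesse (0 losers in the suit); single dummy
# has to guess which opponent holds the queen (~0.5 losers on average).
import random

def losers(trials=100_000):
    dd_losers = sd_losers = 0
    for _ in range(trials):
        queen_with_lho = random.random() < 0.5   # where the queen really is
        # Double dummy: declarer sees the queen, so never loses to it.
        dd_losers += 0
        # Single dummy: guess (here, a coin flip) which way to finesse.
        guess_lho = random.random() < 0.5
        if guess_lho != queen_with_lho:
            sd_losers += 1
    return dd_losers / trials, sd_losers / trials

print(losers())   # roughly (0.0, 0.5)
```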
barmar, on 2012-March-27, 22:34, said:
In general, any auction that would cause a human expert to go into the tank cannot be programmed with rules.
#33
Posted 2012-March-29, 03:20
helene_t, on 2012-March-28, 01:57, said:
I suppose one could make a hand-evaluation formula which explicitly devalues honors opposite partner's singleton. But how to upgrade honors in the minor suit partner has opened? It is not guaranteed that partner has length/strength in the suit, but your own holding and/or the opps' bidding may make it likely.
And how to assess whether a single stopper in opps' suit is enough for a 3NT bid? I can't imagine making rules for that.
That said, GIB sometimes suffers from statistical flukes in short simulation series, and I have wondered if some Bayesian approach might work: make rules that provide a prior for the number of total points for each alternative action, and then use simulations to update the prior.
You provide a good argument for improving GIB's calculation of Total Points rather than relying on simulations - if you followed that argument through, you would be responding to what I actually wrote rather than to what you think I got wrong.
Total Points should be dynamically recalculated as partner's hand and the opponents' hands are revealed by the bidding - the fixed standard formula is awful.
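Something along these lines, just as an illustration - the adjustment values here are guesses of mine, not a tested scheme:

```python
# Sketch of "dynamic" total points: start from a basic count and adjust it
# as the auction reveals partner's shape. The adjustments are illustrative
# guesses, not a calibrated evaluation method.
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def basic_points(hand):
    """hand: dict suit -> string of cards, e.g. {"S": "AQxxx", ...}"""
    hcp = sum(HCP.get(c, 0) for cards in hand.values() for c in cards)
    length = sum(max(0, len(cards) - 4) for cards in hand.values())
    return hcp + length

def dynamic_points(hand, partner_known_length):
    """partner_known_length: dict suit -> length partner has shown (or None)."""
    pts = basic_points(hand)
    for suit, plen in partner_known_length.items():
        if plen is None:
            continue
        honors = sum(HCP.get(c, 0) for c in hand[suit])
        if plen <= 1:
            pts -= honors // 2          # devalue honors opposite known shortness
        elif plen >= 5 and honors:
            pts += 1                    # small upgrade for honors in partner's long suit
    return pts
```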
#34
Posted 2012-March-29, 11:27
xxhong, on 2012-March-28, 16:40, said:
Saying this over and over doesn't make it true. I think about most of my slam bidding (with robots and humans), and simple trick counting is rarely how I do it. That only works when you have a running suit.
Most of my slam bidding is about trying to figure out how many controls and losers we have. And there's often lots of estimating (or guessing) about whether partner is likely to have a relevant side queen -- unless you have sophisticated agreements, these are hard to pinpoint. Basically, it's about how well the two hands seem to fit together.
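For what it's worth, the control side of that is easy to write down. A rough sketch only: A = 2, K = 1, and the check at the end merely verifies that no suit has two fast losers; it says nothing about where twelve tricks are coming from.

```python
# Rough sketch: controls are A = 2, K = 1 (12 in the whole deck).
# suit_control() uses the usual definitions: first-round control = ace or
# void, second-round control = king or singleton.
CONTROL = {"A": 2, "K": 1}

def controls(hand):
    """hand: dict suit -> string of cards, e.g. {"S": "AKxx", ...}"""
    return sum(CONTROL.get(c, 0) for cards in hand.values() for c in cards)

def suit_control(my_cards, partner_cards):
    """0 = first-round control, 1 = second-round control, 2 = none."""
    combined = my_cards + partner_cards
    if "A" in combined or not my_cards or not partner_cards:
        return 0
    if "K" in combined or len(my_cards) == 1 or len(partner_cards) == 1:
        return 1
    return 2

def no_two_fast_losers(my_hand, partner_hand):
    return all(suit_control(my_hand[s], partner_hand[s]) < 2
               for s in ("S", "H", "D", "C"))
```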
#35
Posted 2012-March-29, 12:39
Right now GIB's bidding is very rough. The other day I saw GIB bid 4S with AKJxxx xxxxx xx - after p(GIB) p 1D p 1S 2C(opp) 3S p - and we missed a cold 7S (the details could be slightly off, but the hand is real). Does the simulation help you there? Also, it is a long-known bug that GIB doesn't really know how to proceed after partner's splinter bid and often overbids or underbids. All of these problems point to badly designed simulations as the main thing keeping GIB from sound bidding. Simulation is actually very powerful if you can supply a good, intelligent set of constraints, a large sample size, and careful analysis to avoid over-optimistic double-dummy evaluations. For now, I see none of this has been done.
GIB's evaluation toolkit is also very limited. It doesn't have a sound loser-count scheme to decide how high to bid with distributional hands, and it doesn't have a sound trick-counting scheme. All of this should be carefully implemented to improve performance, because human experts apply these evaluation techniques on almost every hand. Human experts also construct hands and simulate when facing difficult problems. After years, I still see none of this carefully implemented in GIB's code. Many bids are also badly defined (or not defined at all) after three rounds of bidding. Such improvements should at least be encouraged, not prohibited, if BBO is serious about improving GIB's bidding performance. In many situations GIB cannot even make a penalty double of a slam contract with sure defensive tricks, or it makes the double and then fails to cash them. The root of all these problems is badly designed simulations and naive hand-evaluation tools.
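Even the basic Losing Trick Count would be a start - something like this (the textbook version, without the usual refinements for unsupported honors):

```python
# Basic Losing Trick Count as one possible loser-count scheme.
# Cards in each suit string are assumed to be ordered high to low.
def ltc(hand):
    """hand: dict suit -> string like "AQxxx"; returns losers 0..12."""
    losers = 0
    for cards in hand.values():
        top = cards[:3]                      # only the three highest cards count
        losers += sum(1 for c in top if c not in "AKQ")
    return losers

# Example: AKxxx Kxx Qxx xx -> 1 + 2 + 2 + 2 = 7 losers (a typical opener)
example = {"S": "AKxxx", "H": "Kxx", "D": "Qxx", "C": "xx"}
assert ltc(example) == 7
```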
barmar, on 2012-March-29, 11:27, said:
Most of my slam bidding is about trying to figure out how many controls and losers we have. And there's often lots of estimating (or guessing) about whether partner is likely to have a relevant side queen -- unless you have sophisticated agreements, these are hard to pinpoint. Basically, it's about how well the two hands seem to fit together.
#36
Posted 2012-March-29, 13:41
Unlike cloa513, you obviously have some knowledge of both bridge and software. While I appreciate your willingness to report problems and to offer suggestions for improvements (some of which are quite sensible), an attitude adjustment would be appreciated. If you are incapable of that, then please stop commenting on these matters - the style in which you tend to post is not constructive.
We have some very nice and highly-skilled people who are working hard on trying to improve GIB. I am sorry that you do not seem to be satisfied with the progress they are making, but the manner in which you (and a handful of others) criticize their efforts can only serve to demoralize them.
As I am sure you understand, programming a computer to play expert-level bridge is an extremely difficult task (despite the impression you sometimes give that there is not much to it). Please show some respect for the fine people who we have assigned to that task. If you cannot do that then please go away.
And if you really think that you are so much smarter about this than we are (or even if you don't), please feel free to try to write your own bidding program. If you can come up with something that is significantly better than GIB, you might find that we are willing to make you a generous offer.
Fred Gitelman
Bridge Base Inc.
www.bridgebase.com
#37
Posted 2012-March-29, 15:31
fred, on 2012-March-29, 13:41, said:
Unlike cloa513, you obviously have some knowledge of both bridge and software. While I appreciate your willingness to report problems and to offer suggestions for improvements (some of which are quite sensible), an attitude adjustment would be appreciated. If you are incapable of that, then please stop commenting on these matters - the style in which you tend to post is not constructive.
We have some very nice and highly-skilled people who are working hard on trying to improve GIB. I am sorry that you do not seem to be satisfied with the progress they are making, but the manner in which you (and a handful of others) criticize their efforts can only serve to demoralize them.
As I am sure you understand, programming a computer to play expert-level bridge is an extremely difficult task (despite the impression you sometimes give that there is not much to it). Please show some respect for the fine people who we have assigned to that task. If you cannot do that then please go away.
And if you really think that you are so much smarter about this than we are (or even if you don't), please feel free to try to write your own bidding program. If you can come up with something that is significantly better than GIB, you might find that we are willing to make you a generous offer.
Fred Gitelman
Bridge Base Inc.
www.bridgebase.com
#38
Posted 2012-March-29, 17:24
#39
Posted 2012-March-29, 22:23
barmar, on 2012-March-29, 17:24, said:
Fair enough, but in "Upgrated GIB" you spoke of adding overriding pragmatic rules to take care of situations like cashing out and where GIB seems to lose its way. Has this also been ruled out?
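Something as crude as this would already help, I'd think. A sketch only: the position fields and count_sure_winners() are placeholders for whatever the play engine actually provides.

```python
# Sketch of the kind of pragmatic override discussed above: before falling
# back on a simulation, check whether the established winners already make
# the contract (or take the rest of the tricks), and if so just cash them.
def choose_card(position, simulate_play, count_sure_winners):
    tricks_needed = position.contract_level + 6 - position.tricks_taken
    sure = count_sure_winners(position)       # top tricks cashable right now
    if sure >= tricks_needed or sure >= position.cards_left_per_hand:
        return position.highest_sure_winner() # pragmatic rule: just cash out
    return simulate_play(position)            # otherwise simulate as usual
```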
#40
Posted 2012-March-30, 23:02
fred, on 2012-March-29, 13:41, said:
Unlike cloa513, you obviously have some knowledge of both bridge and software. While I appreciate your willingness to report problems and to offer suggestions for improvements (some of which are quite sensible), an attitude adjustment would be appreciated. If you are incapable of that, then please stop commenting on these matters - the style in which you tend to post is not constructive.
We have some very nice and highly-skilled people who are working hard on trying to improve GIB. I am sorry that you do not seem to be satisfied with the progress they are making, but the manner in which you (and a handful of others) criticize their efforts can only serve to demoralize them.
As I am sure you understand, programming a computer to play expert-level bridge is an extremely difficult task (despite the impression you sometimes give that there is not much to it). Please show some respect for the fine people who we have assigned to that task. If you cannot do that then please go away.
And if you really think that you are so much smarter about this than we are (or even if you don't), please feel free to try to write your own bidding program. If you can come up with something that is significantly better than GIB, you might find that we are willing to make you a generous offer.
Fred Gitelman
Bridge Base Inc.
www.bridgebase.com
As long as we do not know exactly how GIB works, you must expect us to speculate and often get it wrong. All suggestions can seem destructive, but I think xxhong and cloa513 have made contributions which could be used to improve GIB. The same goes for antrax. One of cloa513's questions sticks in my mind and I think merits further investigation: "why do GIB's simulations not cause it to cash out when it has established enough winning tricks?"
Now, I am biased: I want to see a pragmatic reasoning AI become the world champion, and purely selfishly I wish you had developed Base III into Base 17!
Having defended the indefensible for most of my life, I do empathise with Barmar even though he does it much better than I used to.