caching optimization ... #3

Open
linas opened this issue Aug 20, 2020 · 8 comments

Comments


linas commented Aug 20, 2020

I may sound like a broken record, but .... I think there are a few small things you can do to improve performance, via caching...
So, here:

Handle AtomSpaceManager::executePattern(const std::string &id, const std::string &pattern) const {

change to (const std::string &id, const std::string &pattern, const std::string &key)

and here:

Handle result;
if (h->is_executable()) {
ValuePtr pattResult = h->execute(atomspace.get());
result = std::dynamic_pointer_cast<Atom>(pattResult);
return result;
} // not a pattern matching query

change to

    // Is there a cached value? If so return it.
    ValuePtr pattResult = h->getValue(hkey);
    if (nullptr != pattResult) return HandleCast(pattResult);

    // Oh no! we have to actually do the work of searching!
    pattResult = h->execute(atomspace.get());
    h->setValue(hkey, pattResult);
    return HandleCast(pattResult);

The search is done only once; after that, the previously cached value is returned "instantly".
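Putting the two changes together, a minimal sketch of the modified method might look like the below (assuming the existing code that resolves `atomspace` from the id and parses `pattern` into the Handle `h` stays as-is; wrapping the string key in a PredicateNode is just one possible choice for building the key Atom):

Handle AtomSpaceManager::executePattern(const std::string &id,
                                        const std::string &pattern,
                                        const std::string &key) const {

    // ... existing code that resolves `atomspace` from `id` and parses
    // `pattern` into the Handle `h` remains unchanged ...

    if (h->is_executable()) {
        // Wrap the string key in an Atom; a PredicateNode is one simple choice.
        Handle hkey = atomspace->add_node(PREDICATE_NODE, std::string(key));

        // Is there a cached value? If so, return it.
        ValuePtr pattResult = h->getValue(hkey);
        if (nullptr != pattResult) return HandleCast(pattResult);

        // Otherwise, do the search once and cache the result on `h`.
        pattResult = h->execute(atomspace.get());
        h->setValue(hkey, pattResult);
        return HandleCast(pattResult);
    }
    // ... not a pattern-matching query; existing fall-through code ...
}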

I'm recommending this API because it's "compatible" with the previously-mentioned cog-execute-cached! API (which, BTW, is not etched in stone yet; it's still an experimental API).


Habush commented Aug 20, 2020

Thank you for looking into the code, Linas. About the suggestion to add caching: wouldn't it be better to use cog-execute-cached on the client side instead of adding the caching to the server side? What if another client sends a different query with the same key? We would be returning a wrong result.


linas commented Aug 20, 2020

different query with the same key?

The key-value store is per-atom. You can use the same key for all atoms.
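For illustration, here is a small self-contained sketch (node names and values are made up) of why one shared key cannot cause collisions: the key→value table lives on each atom, so the same key stores independent results on different query atoms.

#include <opencog/atomspace/AtomSpace.h>
#include <opencog/atoms/value/FloatValue.h>

using namespace opencog;

int main()
{
    AtomSpace as;

    // One key, shared by every atom.
    Handle hkey = as.add_node(PREDICATE_NODE, "*-query-cache-*");

    // Two different atoms, standing in for two different queries.
    Handle qa = as.add_node(CONCEPT_NODE, "query A");
    Handle qb = as.add_node(CONCEPT_NODE, "query B");

    // Each atom carries its own key->value table.
    qa->setValue(hkey, createFloatValue(1.0));
    qb->setValue(hkey, createFloatValue(2.0));

    ValuePtr va = qa->getValue(hkey);   // the value stored on qa: 1.0
    ValuePtr vb = qb->getValue(hkey);   // the value stored on qb: 2.0
    return 0;
}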

server side

Well, part of the point is that you can pre-query 1001 typical, common queries, and then any client who does one of these typical common queries gets an instant answer.

And if you pre-query those 1001 typical, common queries and save them in rocksdb, then the next time you reboot the server, you don't even have to pre-mine again; they are still there.


Habush commented Aug 20, 2020

I see. In that case I will make the change you suggested.


linas commented Aug 20, 2020

Thank you!


Habush commented Aug 25, 2020

From the example you gave above, the key is a Handle instead of a string. So I should instantiate a Handle from the input key string before using it to look up previous values. But what is the type of this key Handle (Atom)? I see in this example the type of the key is a PredicateNode. Does this mean key handles should have the type PredicateNode, or is the choice arbitrary?


linas commented Aug 25, 2020

The keys must always be Atoms. They can be any Atom, but if you just want something simple, e.g. "a string", then use PredicateNode. The values can be any Value.


linas commented Aug 25, 2020

If you do use something simple, and you use the same key everywhere, then the naming convention is to surround it with asterisk-dashes, e.g. (PredicateNode "*-MyWhizBangKey-*") or (PredicateNode "*-my-whiz-bang-key-*"). These work as "eyecatchers" and make the key easier to spot in the code and in the data.


linas commented Aug 25, 2020

If you want to use fast file loading, i.e. NOT use guile when loading from files, then there are a dozen hard-coded string commands you can put in your file. They are in Commands.h; see /opencog/persist/sexpr/Commands.h for details.
