Commit 9179f5d
minor changes
abhishek-ghose committed Sep 28, 2024
1 parent b581b03 commit 9179f5d
Showing 2 changed files with 10 additions and 0 deletions.
Binary file added assets/active_learning/sampling_bias_errors.png
10 changes: 10 additions & 0 deletions inactive_learning.html
@@ -240,6 +240,16 @@ <h2 id="here-be-dragons">Here be Dragons</h2>

<p>The greatest uncertainty lies right in the middle of P and Q, since this is where the classifier places its boundary. In the next iteration, we sample points around C. The slim chunks of points around C match what the classifier has already seen, and it cannot know (with this sampling strategy) that there is a small red chunk, of width \(5\%\), a little further out to its left. So the classification boundary doesn’t change, C’s view of its own uncertainty is reinforced in future iterations, and the sampling bias cascades. Classifier C ends up with an error of \(5\%\).</p>

<!-- _includes/image.html -->
<div class="image-wrapper">

<img src="/assets/active_learning/sampling_bias_errors.png" alt="Error regions for each classifier" />


<p class="image-caption">Error regions for each classifier.</p>

</div>
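<p>The querying rule above can be made concrete. Here is a minimal sketch of two uncertainty scores commonly used for this kind of boundary-hugging sampling (least-confidence and margin); the function names and example probabilities are my own, not from the post:</p>

```python
import numpy as np

def least_confidence(proba):
    """Uncertainty as 1 - max class probability; highest for points
    whose predicted class probabilities are closest to uniform."""
    return 1.0 - proba.max(axis=1)

def margin_uncertainty(proba):
    """Negative gap between the top two class probabilities; a point
    on the boundary (e.g. p = [0.5, 0.5]) gets the highest score."""
    part = np.sort(proba, axis=1)
    return -(part[:, -1] - part[:, -2])

# A point near the P/Q boundary (p ~ 0.5) outscores a confident one,
# so a greedy sampler keeps querying around the current boundary C.
proba = np.array([[0.5, 0.5],   # on the boundary
                  [0.9, 0.1]])  # deep inside a class region
scores = least_confidence(proba)
```

<p>Under either score, the most uncertain points are always the ones straddling the current boundary, which is exactly why the small red chunk further out never gets queried.</p>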

<p>Now, this is a contrived example, and you could argue that a better heuristic would handle this specific case (these are precisely the kinds of problems AL research tries to address), but the larger point is that there will always be this pesky issue of dealing with unknowns: the dataset, the classifier, the representation, and so on.</p>
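<p>One such heuristic, sketched here as an illustrative assumption rather than anything the post prescribes, is an \(\epsilon\)-greedy query rule: mix a little random exploration into the uncertainty-based choice, so the sampler occasionally looks beyond the current boundary and can stumble on the hidden red chunk:</p>

```python
import numpy as np

def epsilon_greedy_query(uncertainty, rng, epsilon=0.1):
    """Pick a uniformly random pool index with probability epsilon
    (exploration); otherwise pick the most uncertain index
    (exploitation). With epsilon > 0, every region of the pool --
    including ones the classifier is confidently wrong about --
    is eventually sampled."""
    if rng.random() < epsilon:
        return int(rng.integers(len(uncertainty)))
    return int(np.argmax(uncertainty))

rng = np.random.default_rng(0)
# Hypothetical uncertainty scores for a 4-point pool; index 1 is the
# boundary point a pure-greedy sampler would query forever.
uncertainty = np.array([0.05, 0.5, 0.1, 0.02])
picks = [epsilon_greedy_query(uncertainty, rng, epsilon=0.2) for _ in range(200)]
```

<p>Most queries still go to the most uncertain point, but the random fraction guarantees coverage of the rest of the pool over time.</p>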

<p>There is obviously a lot to say on this topic, but I’ll try to summarize my thoughts below:</p>