[
{
"objectID": "posts/survey_2022/index.html",
"href": "posts/survey_2022/index.html",
"title": "HoloViz Survey Results",
"section": "",
"text": "Welcome! HoloViz offers a suite of open-source tools for comprehensive data visualization in a web browser, including high-level interfaces for quick construction and precise control to facilitate both exploratory analysis and complex dashboard building.\nWe recently conducted our first-ever user survey, and thanks to the 130+ respondents, we gained valuable insights into how users interact with HoloViz tools and where our documentation could be improved. Since then, we’ve been busy implementing your suggestions; we’ve initiated a documentation structure revamp, provided a path to fully interactive documentation, and taken steps to enhance our community, including hosting documentation sprints, establishing a formal governance structure, joining NumFOCUS, launching a Discord server, and conducting a data app contest.\nOur future plans focus on two main areas:\nWe’re excited to share this summary of the survey results, our progress, and future plans."
},
{
"objectID": "posts/survey_2022/index.html#contents",
"href": "posts/survey_2022/index.html#contents",
"title": "HoloViz Survey Results",
"section": "Contents:",
"text": "Contents:\n\nSelect Survey Results\n\nUsers and their usage of HoloViz\n\nUser field and role\nDuration of HoloViz use\nFirst vs. most used HoloViz library\nhvPlot vs. HoloViews\nHoloViz dev environment\nCommon data and other packages\nSharing your work\nType hints\n\nAbout HoloViz docs\n\nDocs rating\nOverall docs priorities\nPackage-specific docs type priorities\nPackage-specific docs topic priorities\nHoloViz tutorial\n\n\nActions taken and planned in response to the survey\n\nProgress and achievements\n\nDocumentation structure revamp\nIn-Browser interactive examples\nCommunity building\n\nFuture plans\n\nEnhancing reference materials\nAssisting with package selection\n\n\nClosing"
},
{
"objectID": "posts/survey_2022/index.html#select-survey-results",
"href": "posts/survey_2022/index.html#select-survey-results",
"title": "HoloViz Survey Results",
"section": "Select Survey Results",
"text": "Select Survey Results\n\nUsers and their usage of HoloViz\nHere are some of the key highlights that we learned about a slice of our user community.\n\nUser field and role\nWe asked: “What field do you work in?” and “What title best characterizes your role when using HoloViz tools?”.\nHoloViz tools are clearly used across a wide range of domains, including academia, industry, public, and private sectors. The diversity of applications shows the power and adaptability of HoloViz to support data visualization and analysis needs across many areas of work and study. We hope to further broaden its utility, as cross-pollination of ideas and use cases across fields serves to strengthen our ecosystem and drive open-source innovations.\n\n\nQuick note:\n\nMany of you wrote custom responses for several of the questions. We have reviewed and incorporated them into our summaries and responses, but have either collapsed them into ‘other’ or omitted them in these summary plots so that the displays don’t blow up from a flood of unique categories.\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\nAdditionally, approximately 60% of our respondents are “scientists” specializing in either data, research, or applied fields, while another significant portion comprises “engineers” from various domains such as software and machine learning. Although the number of student responses in this survey was limited, we recognize the importance of actively engaging this demographic and will try to improve their turnout in the future.\n\n\n\n\n\n\n \n\n\n\n\n\n\nDuration of HoloViz use\nWe asked: “For how long have you used HoloViz tools?”.\nWe were hoping to have a balance of responses from both our experienced users and newer users with fresh perspectives. Luckily, respondents were roughly split half and half on whether they have used HoloViz tools for over a year. 
The range of experience levels provides a valuable mix of feedback on both the cultivated expertise that comes with long-term use, as well as opportunities to improve the user experience for those just beginning their journey with HoloViz.\n\n\n\n\n\n\n \n\n\n\n\n\n\nFirst vs. most used HoloViz library\nWe asked: “What was the first HoloViz Tool that you used?” and “Which specific HoloViz package have you used the most?”.\nMany of you started your HoloViz journey working with HoloViews (one of the original HoloViz packages), but are now Panel aficionados. Panel has been surging in popularity, with people from very different backgrounds now creating cool web apps. We are thinking that Panel and hvPlot are probably the appropriate entry points into HoloViz for new users looking to either do dashboarding or data exploration, respectively (more on this thought later).\n\n\n\n\n\n\n \n\n\n\n\n\n\nhvPlot vs. HoloViews\nWe asked: “If you have used both hvPlot and HoloViews, which do you prefer for data exploration?”.\nBelow, we separate the results into respondents that have either used hvPlot or HoloViews the most, as asked in a prior question. Among users who mostly use hvPlot (left), a significant majority of about 92% expressed a preference for hvPlot for data exploration, indicating a strong correlation between usage and preference. Interestingly, among users who mostly use HoloViews (right), the majority (about 63%) still preferred HoloViews, but the margin was narrower. This could suggest that while users tend to prefer the tool they use most often, hvPlot has a notable appeal even among those who primarily use HoloViews.\n\n\n\n\n\n\n \n\n\n\n\n\n\nHoloViz dev environment\nWe asked: “What notebook environment do you use when working with HoloViz tools?” and “Where do you write Python scripts when working with HoloViz tools?”.\nUnderstanding the environments in which our users operate is crucial for optimizing the HoloViz toolset. 
So, we sought to identify the most common notebook and Python scripting environments among our user base. Jupyter Lab emerged as the favored notebook environment, used by 66% of respondents, suggesting its capabilities align well with HoloViz’s strengths. Meanwhile, over 62% of respondents prefer VS Code for scripting, likely reflecting its robust Python development support.\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nCommon data and other packages\nWe asked: “What data library/structure do you commonly use with HoloViz?” and “What other packages do you use alongside HoloViz tools?”.\nUnsurprisingly, Pandas and NumPy are the most commonly used data libraries with HoloViz, reflecting their foundational role in data science. However, Xarray also shows substantial usage, underscoring its relevance for multi-dimensional array operations. The wide range of other packages used alongside HoloViz, including Matplotlib, Plotly, and scikit-learn, illustrates the versatility of HoloViz tools and their integration within diverse workflows. These insights help us enhance HoloViz’s compatibility with popular libraries and tools, optimizing user experience.\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nSharing your work\nWe asked: “How do you share your HoloViz work with others?” and “If you share live, running Python apps, what frameworks and platforms do you use?”.\nUnderstanding how users disseminate their HoloViz work is key to enhancing its collaborative capabilities. Most users tend to share their entire notebooks, highlighting the notebook’s value as a comprehensive record of data analysis that combines code, visualizations, and narrative. 
Exported HTML and internally hosted apps also emerged as common sharing methods, reflecting the need for static and interactive data presentation formats respectively.\n\n\n\n\n\n\n \n\n\n\n\nWhen it comes to sharing live, running Python apps, Flask emerged as the top choice, likely due to its simplicity and flexibility for web app development. However, a diverse range of other platforms like Amazon Web Services, SSH, and Nginx are also employed, indicating the varied requirements of our users in terms of hosting and deployment. These insights inform our efforts to ensure HoloViz tools are compatible and easy to use across various sharing and deployment platforms.\n\n\n\n\n\n\n \n\n\n\n\n\n\nType hints\nWe asked: “When using Python packages, do you benefit if a package has used type hints and declarations in their code?”.\nAs we transition from understanding how users interact with and disseminate their HoloViz work, we also sought insights into what enhances their experience with Python packages more broadly, especially relating to different forms of documentation. A clear majority, over 70%, affirmed that type hints in a package’s code are beneficial. This feature, which aids in understanding the expected input and output types of functions, can increase code readability, assist in debugging, and improve IDE tooling support. The response from our users highlights the value of this coding practice, and as a result, we are discussing how best to add type hints into our code base.\n\n\n\n\n\n\n \n\n\n\n\n\n\n\nAbout HoloViz docs\nWe asked: “What development activities would help you most right now?”.\nOne of the primary purposes of this first survey was to help prioritize much-needed updates to our documentation, with a particular focus on improving the new user experience. 
And clearly, you agree that documentation is the highest priority:\n\n\n\n\n\n\n \n\n\n\n\n\nDocs rating\nWe asked: “How was your documentation experience when you were a new HoloViz user?”.\nUsers rated their initial documentation experience with HoloViz on a scale of 1 to 5. The majority had a neutral (35%) or slightly negative (29%) experience, with a quarter of respondents reporting a positive experience (25%). Only a small percentage found their experience to be excellent or unsatisfactory. These results highlight areas for improvement in our documentation to ensure a smoother onboarding experience for new HoloViz users.\n\n\n\n\n\n\n \n\n\n\n\n\n\nOverall docs priorities\nOne of the most important docs questions that we asked was overall “What potential changes to our documentation do you think would most improve the new user experience?”. For simplicity, below are the results for the pre-defined answers (although there were many write-ins that we are taking action on).\nThe most favored suggestion, with nearly 59% support, was to improve the reference API material with examples, signifying the need for clear, actionable examples in API documentation. About half of the respondents also sought examples of when and how to switch between different HoloViz tools or a guide on choosing the most appropriate package to work with, indicating a demand for more guidance on using the right tool for a particular task.\n\n\n\n\n\n\n \n\n\n\n\n\n\nPackage-specific docs type priorities\nAbout each respondent’s most used package, we asked “For [package], rank the type of documentation that we should focus on improving to help you most right now?”. Below are the results for the three most popular HoloViz packages.\nThe results indicate diverse needs across these packages. For Panel and hvPlot, ‘How-to’ recipes for specific tasks emerged as a priority (see Progress and achievements). 
These practical guides can help users navigate specific use-cases and tasks, reinforcing understanding through application. This suggests that users are seeking more actionable guidance on using these packages to address specific challenges or scenarios.\nOn the other hand, HoloViews users found ‘Explanation of concepts and design’ to be the most beneficial. This implies that users find the conceptual underpinnings and design principles of HoloViews critical for the effective use of the package. As we revamp our documentation, these user priorities will guide our focus, ensuring we deliver information that is both useful and relevant to our users.\n\n\n\n\n\n\n \n\n\n\n\n\n\nPackage-specific docs topic priorities\nIn addition to understanding the types of documentation our users find most helpful, we were also interested in identifying specific topics within those documentation types where users saw room for improvement. To this end, we again segmented respondents based on their most-used package - Panel, hvPlot, or HoloViews - and asked: “For the package that you selected, improvement to what documentation topics would you most benefit from?”.\nPanel users expressed a need for better documentation on app responsivity and building custom components. These topics are central to creating and managing effective Panel applications, and users’ responses indicate the need for clearer or more comprehensive guidance in these areas.\n\n\n\n\n\n\n \n\n\n\n\nIn contrast, hvPlot and HoloViews users were focused on different topics. A clear need for more guidance on interactivity emerged, suggesting users are keen to leverage the interactive capabilities of these packages but may find the current documentation lacking. In addition, users expressed a desire for better integration with other HoloViz packages, underscoring the importance of cohesive, cross-package documentation. 
The request for improved guidance on applying customizations points to users’ desire for more personalized, adaptable visualizations.\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nHoloViz tutorial\nWe asked: “Have you checked out the tutorial on HoloViz.org? If so, in what forms have you experienced it?”.\nWhen asked about their engagement with the HoloViz tutorial, most users reported reading it directly on the website, signifying the convenience and immediacy of this method. However, many also downloaded the tutorial notebooks for a more hands-on, interactive learning experience. Guided talks through the tutorial were another popular choice, underscoring the importance of providing diverse learning formats.\nDespite these varied approaches, fewer users utilized cloud infrastructure for tutorial access, suggesting that this option might need more visibility or user-friendly features. A small fraction were unaware of the tutorial, highlighting an opportunity to improve communication and resource visibility."
},
{
"objectID": "posts/survey_2022/index.html#actions-taken-and-planned-in-response-to-the-survey",
"href": "posts/survey_2022/index.html#actions-taken-and-planned-in-response-to-the-survey",
"title": "HoloViz Survey Results",
"section": "Actions taken and planned in response to the survey",
"text": "Actions taken and planned in response to the survey\n\nProgress and achievements\n\nDocumentation structure revamp\nA key point of feedback has been the lack of a user-centric structure in our documentation. Historically, our docs have leaned heavily on comprehensive user guides for each topic, making it challenging for users to locate necessary information and for contributors to identify and fill gaps. In response, we’ve initiated a transition, beginning with Panel’s documentation, to adopt a structure inspired by the Diataxis framework. Diataxis prioritizes understanding user needs and organizes documentation around distinct types such as how-to guides, references, and explanations. Our progress so far includes migrating user guides to how-to guides, adding an Explanation section, and overhauling the App Gallery:\n\nMigrate user guide to how-to guides (#4244, #4251, #4267, #4290, #4412, #4422, #4759, #4774)\nAdd Explanation section (#2797, #3168, #4664)\nOverhaul App Gallery (#4047, #4565, #4574, #4598, #4683)\n\n\n\nIn-Browser interactive examples\nThe survey highlighted a desire for fully interactive documentation. We’ve taken strides to ensure that most of Panel’s documentation can be interactively executed directly in the web browser. This has already elicited positive feedback, with other software teams showing interest in adopting this approach:\n\nUse pyodide rendering throughout documentation and add JupyterLite links (#4751)\n\n\n\nCommunity building\nAligned with survey feedback for further creation and fostering of the HoloViz community, we’ve taken tangible steps to enhance our community. 
Here’s what we’ve accomplished since the survey:\n\nHosted sprints at PyData and EuroPython, focusing on documentation.\nEstablished a formal governance structure for HoloViz, including a code of conduct and a steering committee.\nObtained fiscal sponsorship from NumFOCUS, further aligning HoloViz with a broader community of open-source projects and opening opportunities for increased collaboration.\nShifted our user and contributor chats to Discord, enhancing community interaction and transparency.\nConducted a Panel app contest offering substantial prizes.\n\n\n\n\nFuture plans\n\nEnhancing reference materials\nImproving the reference material is clearly important to users regardless of experience level. Many of you wrote in additional comments related to this theme. For instance: - “It would be great if mentions of various functions, etc in the examples were hyperlinked to an API reference (and the API reference had examples).” - “extensive description of all supported parameters and expected parameter options” - “…Stories are great, but without API docs, I can’t figure out what Holoviews is actually doing.” - “[for hvPlot] since there isn’t a searchable API reference, it’s difficult to figure out how to use them at all or if I should be trying to use them.” - “…links to reference pages [from other parts of the documentation]” - “hv.help() should tell you [what to check in the external plotting library documentation]” - “…make more complete docstrings. It’s usually tough to understand what options are available.” - “…clear overview of all the possible settings…” - “Add type hints and stubs to allow static check and autocomplete. 
That is the most lacking feature of param.”\n\nPlanned actions for reference materials:\nWe aim to significantly improve our reference materials by: - Creating API reference pages for all HoloViz libraries - Including links from elsewhere in the docs to the reference material - Enhancing hv.help() with better outputs, sorting, and parameter support: - Add output to reference pages (e.g. HoloViews #5423) - Show the docstring (e.g. HoloViews #5421, #4076) - Sort the output alphabetically (e.g. HoloViews #5420) - Clarify the distinction of different tools args (e.g. HoloViews #4636, #5231) - Ensure the parameters are supported (e.g. HoloViews #2887) - Align and organize reference guides (see proposal on Panel #4305) - Standardize and complete docstrings: - Document input data format (e.g. HoloViews #3632, #2925, #2116) - Write consistent docstrings (e.g. HoloViews #2322, nbsite #67) - Add type hints to code - Create a friendly display for Param.Parameterized objects (e.g. Param #425) - Fuzzy matching of not-yet-supported backend-specific options (e.g. HoloViews #4463)\n\n\n\nAssisting with package selection\nThere’s notable confusion about choosing the right HoloViz package for a task. Respondents found it difficult to understand the boundaries and overlaps between the packages and to decide which was best for their application:\n\n“The other issue that I know I share with many of my colleagues is the confusion of the HoloViz package separations. 
For a beginner it is really hard to grasp where the boundaries are and what the individual package is doing in particular, especially because they can have all different sorts of backends (matplotlib, bokeh, pyplot) and seem to have some overlap (holoviews/geoviews).”\n“in general I found it difficult to easily decide which of the many packages was best for my application”\n“…I always struggle with the many different options of doing something…”\n“The number of subproject (panel, colorcet…) is somewhat confusing.”\n“Hard to learn … when to switch to holoviews from hvplot…”\n\n\nPlanned actions for package selection:\n\nGuide new users to HoloViz.org from our individual library sites to offer a comprehensive view of the ecosystem. Many of our standalone library sites (such as datashader.org) do not sufficiently highlight their part in the wider HoloViz ecosystem or suggest alternative packages that could be more suitable. We aim to address this by prominently signposting HoloViz.org on each library homepage, directing new users to a hub where they can receive guidance on selecting the most appropriate package.\nOverhaul the ‘Getting Started’ section on HoloViz.org to provide clearer guidance on package selection. We understand that our current resources may not adequately guide new users to the most suitable package for their use case. Our plan is to enhance the ‘Getting Started’ experience by recommending most users to begin with either hvPlot or Panel. These two packages collectively offer access to the majority of HoloViz tools and effectively cater to users either in data exploration mode (hvPlot) or app-building stage (Panel). We also aim to clarify the use-case boundaries between packages with overlapping functionality, like hvPlot and HoloViews, to alleviate confusion about initial package selection and subsequent transitioning between them.\nUnify the reactive API across HoloViz to simplify the creation of UI components and data pipelines. 
We recognize that the current diversity and inconsistency in our approach to building reactive UI components across different packages can complicate package selection and transition between methods. To address this, we’re working on unifying HoloViz’s reactive programming approach. This will make it more intuitive and straightforward to construct reactive data pipelines and UI components across the ecosystem, thereby clarifying the appropriate tool selection for specific workflows. Follow the discussion on Holoviz #370."
},
{
"objectID": "posts/survey_2022/index.html#closing",
"href": "posts/survey_2022/index.html#closing",
"title": "HoloViz Survey Results",
"section": "Closing",
"text": "Closing\nAs an open-source project, HoloViz thrives on contributions from our diverse community of users. Our goal is not just to develop powerful data visualization tools, but also to build a strong, active community of contributors. If any of the future plans resonate with you, we encourage you to get involved. You can reach out in our #new-contributors channel on Discord or engage in a relevant issue on GitHub. Many of the improvements we’ve made so far have been thanks to contributions from both new and existing community members, who we always acknowledge in our release posts - as can be seen on the Announcements on our Discourse forum. We look forward to your participation in shaping the future of HoloViz.\nIn closing, we want to express our profound gratitude to all who participated in our first-ever survey. Your time, feedback, and insights are invaluable in guiding our development and refining our focus. Thank you for your continued support and engagement with HoloViz."
},
{
"objectID": "posts/pyviz_holoviz/index.html",
"href": "posts/pyviz_holoviz/index.html",
"title": "PyViz.org and HoloViz.org",
"section": "",
"text": "PyViz.org and HoloViz.org\n\nPyViz is a project originally started by Anaconda, Inc. and now including contributions from a very wide range of external contributors. The project brought together the authors of Datashader, HoloViews, GeoViews, Param, and Colorcet, with the goal of helping people make sense of the confusing Python data visualization landscape. As part of this project, we have added several additional libraries, including Panel and hvPlot.\nHowever, in practice there has been confusion between our work to help make viz more accessible for all Python users and our advocacy for our own particular libraries, approaches, and viewpoints.\nTo help everyone keep things straight, we have split these two goals and approaches into two separate organizations: PyViz and HoloViz.\nLike PyData.org (after which it was named), PyViz.org is an open, non-partisan site owned by NumFOCUS. PyViz is dedicated to sharing information about Python tools, without making claims or judgments about which tool is better. Anyone can contribute factual information to PyViz.org, in the hopes of educating everyone about what tools and capabilities are available in Python. Plus, any Python visualization tool can request a .pyviz.org domain name, which will redirect to their web site. Anaconda, Inc. currently pays for the server and administers PyViz.org, but as laid out in pyviz/website#2, future governance is open to anyone ready to promote Python data visualization in a balanced way.\nMeanwhile, HoloViz.org is an opinionated guide to the tools we created and how to use them to solve problems in data science. These tools were built around and on top of the many science and engineering tools already available in Python, focusing on adding higher-level interfaces that directly address problems faced by end users. 
HoloViz tools support flexibly visualizing data of any dimensionality in any combination, putting together dashboards quickly and conveniently, rendering billions of data points as easily as hundreds, maintaining visual representations separately from domain models, and effectively utilizing the full dynamic range available for visual perception.\nWe hope that separating our efforts in this way will help the community be able to use and support PyViz.org as a general resource for all things viz in Python, while still letting us present a strong case for our own approaches to viz on HoloViz.org.\nNow that this is all set up, we’d love feedback! If you spot any errors, omissions, or just improvements that can be made at PyViz.org, please open an issue or PR at https://github.com/pyviz/website. In particular, coverage of 3D/SciVis approaches and native-GUI tools is relatively light so far, and we’d welcome some updates from people experienced in those areas. Together we can cover a lot more ground than any one group alone can, and can help new Python users find just the right tool for their needs!\n– The HoloViz Team\n(James A. Bednar, Philipp Rudiger, Jean-Luc Stevens, Julia Signell, Chris Ball, and Jon Mease)\n\n\n\n Back to top"
},
{
"objectID": "posts/panel_release_1.5/index.html#what-is-panel",
"href": "posts/panel_release_1.5/index.html#what-is-panel",
"title": "Panel 1.5.0 Release",
"section": "What is Panel?",
"text": "What is Panel?\nPanel is an open-source Python library that allows you to easily create powerful tools, dashboards, and complex applications entirely in Python. With its “batteries-included” philosophy, Panel brings the full PyData ecosystem, advanced data tables, and much more to your fingertips. It offers both high-level reactive APIs and lower-level callback-based APIs, enabling you to quickly build exploratory applications or develop complex, multi-page apps with rich interactivity. As a member of the HoloViz ecosystem, Panel provides seamless integration with a suite of tools designed for data exploration."
},
{
"objectID": "posts/panel_release_1.5/index.html#new-release",
"href": "posts/panel_release_1.5/index.html#new-release",
"title": "Panel 1.5.0 Release",
"section": "New Release!",
"text": "New Release!\nWe are excited to announce the 1.5.0 release of Panel! While this is technically a minor release, it significantly expands the range of possibilities in Panel. Here’s a high-level overview of the most important features:\n\nEasily create new components: It is now trivially easy to build new JavaScript, React, or AnyWidget-based components with hot-reload, built-in compilation, and bundling. Likewise for Python based widgets, panes and layouts.\nNative FastAPI integration: We’ve added native support for running Panel apps on a FastAPI server.\nNew components: This release includes several new components, such as the Placeholder pane, FileDropper and TimePicker widgets, and the ChatStep component.\nImproved chat interface: We have greatly enhanced the ChatInterface user experience by improving its design and performance.\nPY.CAFE support: You can now run Panel apps in PY.CAFE.\nImproved contributor experience: We’ve made significant improvements to the contributor experience.\nNumerous enhancements: This release includes a large number of enhancements and bug fixes, particularly for the Tabulator component.\n\nWe greatly appreciate the contributions from 21 individuals to this release. We’d like to extend a warm welcome to our new contributors: @twobitunicorn, @justinwiley, @dwr-psandhu, @jordansamuels, @gandhis1, @jeffrey-hicks, @kdheepak, @sjdemartini, @alfredocarella, and @pmeier. We also want to acknowledge our returning contributors: @cdeil, @Coderambling, @jrycw, and @TBym. Finally, we give special recognition to our dedicated core contributors, including @Hoxbro, @MarcSkovMadsen, @ahuang11, @maximlt, @mattpap, @jbednar, and @philippjfr.\n\nIf you’re using Anaconda, you can install the latest version of Panel with conda install panel. If you prefer pip, use pip install panel."
},
{
"objectID": "posts/panel_release_1.5/index.html#create-new-components",
"href": "posts/panel_release_1.5/index.html#create-new-components",
"title": "Panel 1.5.0 Release",
"section": "Create New Components",
"text": "Create New Components\nPreviously, creating custom components in Panel often required building a Bokeh extension, which involved complex build tools to set up and distribute the compiled JavaScript (JS) bundle. Alternatively, you could write a ReactiveHTML component, but this process often resulted in a clunky developer experience.\nWith this release, we’re introducing a new set of component base classes that make it effortless to build components, wrap external JS and React libraries, and distribute these components as optimized bundles within your package or app.\n\nIntroducing ESM Components\nThe new base classes — JSComponent, ReactComponent, and AnyWidgetComponent — leverage ECMAScript modules (ESM) to simplify the process of building reusable components. ESM modules make it easier to import other libraries, thanks to import and export specifiers that allow developers to efficiently import functions, objects, and classes from other modules.\nTo declare a new component, simply define an ESM module, either inline or by providing a path to a .js(x) or .ts(x) file. 
The component will be compiled on the fly, with imports dynamically loaded from a Content Delivery Network (CDN).\n\nimport param\n\nfrom panel.custom import JSComponent\n\ncss = \"\"\"\nbutton {\n background-color: #4CAF50;\n color: white;\n border: none;\n padding: 12px 24px;\n font-size: 16px;\n border-radius: 8px;\n}\n\"\"\"\n\nclass ConfettiButton(JSComponent):\n\n clicks = param.Integer(default=0)\n\n _esm = \"\"\"\n import confetti from \"https://esm.sh/canvas-confetti@1.6.0\";\n \n export function render({ model }) {\n const button = document.createElement('button')\n button.addEventListener('click', () => { model.clicks += 1})\n const update = () => {\n confetti()\n button.innerText = `Clicked ${model.clicks} times`\n }\n model.on('clicks', update)\n update() \n return button\n }\"\"\"\n\n _stylesheets = [css]\n\nConfettiButton()\n\n\n\n\n\n \n\n\n\n\nBreaking this down, we can see how easy it is to create a DOM element, attach event listeners and, finally, react to and update parameter values.\n\n\nReact Integration\nWe can also implement this component as a ReactComponent, making it trivially easy to build complex UIs:\n\nfrom panel.custom import ReactComponent\n\nclass ConfettiButton(ReactComponent):\n\n clicks = param.Integer(default=0)\n\n _esm = \"\"\"\n import confetti from \"https://esm.sh/canvas-confetti@1.6.0\";\n \n export function render({ model }) {\n const [clicks, setClicks] = model.useState('clicks')\n React.useEffect(() => { confetti() }, [clicks])\n return ( \n <button onClick={() => setClicks(clicks+1)}>\n Clicked {clicks} times\n </button>\n )\n }\n \"\"\"\n\n _stylesheets = [css]\n\nConfettiButton()\n\n\n\n\n\n \n\n\n\n\nAs you can see, we can use useState hooks to get and set parameter values reactively and can return React components from our render function.\n\n\nAnyWidget Compatibility\nWe’d like to give a shoutout to the anywidget project and especially its author, Trevor Manz, for valuable discussions that inspired many of the ideas behind 
these component classes and influenced the API. We also provide an AnyWidgetComponent class that mirrors the JavaScript (JS) API of AnyWidget, making it possible to reuse AnyWidget components natively in Panel.\nTo demonstrate this, we will fetch the JS implementation of the CarbonPlan AnyWidget directly from GitHub (though we advise against doing this in a production environment) and implement only the Python wrapper class:\n\nimport requests\n\nfrom panel.custom import AnyWidgetComponent\n\nclass Carbonplan(AnyWidgetComponent):\n _esm = requests.get('https://raw.githubusercontent.com/manzt/carbonplan-maps/3f8603042522e83fba0e7abddea63b0463a690e0/carbonplan_maps/widget.js').text\n\n source = param.String(allow_None=False)\n variable = param.String(allow_None=False)\n dimensions = param.Tuple(allow_None=False)\n height = param.String(default='300px')\n opacity = param.Number(default=1.0)\n colormap = param.String(default='warm')\n clim = param.Range(default=(-20, 30))\n region = param.Boolean(default=False)\n selector = param.Dict(default={})\n mode = param.String(default='texture')\n data = param.Parameter()\n\nCarbonplan(\n source=\"https://carbonplan-maps.s3.us-west-2.amazonaws.com/v2/demo/2d/tavg\",\n variable=\"tavg\",\n dimensions=(\"y\", \"x\"),\n sizing_mode='stretch_width',\n height='500px'\n)\n\n\n\n\n\n \n\n\n\n\n\n\nDeveloper Experience First\nWhen developing these component classes, we prioritized enhancing the developer experience. With watchfiles installed and the --dev flag enabled (formerly known as --autoreload), you can benefit from hot-reloading while developing your component.\nBelow is a demonstration of building a simple React form from scratch using Material UI components:\n\n\n\n\n\nCreate Native Components\nAnother thing we made sure of is that you can easily build components that follow the API specification for native Panel components, whether that is a Widget, Pane or Panel (i.e. layout). 
You can simply create mix-ins of the JSComponent, ReactComponent, PyComponent and the WidgetBase, PaneBase or ListPanel classes giving you a component that behaves just like a native Panel component.\nFor more information see some of our how-to guides here:\n\nHow-to create a custom pane\nHow-to create a custom widget\nHow-to create a custom layout\n\n\n\nSimple Compilation and Bundling\nAfter building a component, we aimed to make the bundling process for distribution as straightforward as possible. While loading external libraries from a CDN is fine during development, creating a minimized bundle for production is often a better choice. This can be done easily with the following command:\npanel compile form.py:Form\nThis command compiles a single component. To compile multiple components in one module into a single bundle, use:\npanel compile form.py\nThe only dependencies required are node.js and esbuild, which can be easily installed with conda or your preferred Node installer (e.g., npx).\nTo learn more about imports, compilation, and bundling, see the how-to guide."
},
{
"objectID": "posts/panel_release_1.5/index.html#native-fastapi-integration",
"href": "posts/panel_release_1.5/index.html#native-fastapi-integration",
"title": "Panel 1.5.0 Release",
"section": "Native FastAPI Integration",
    "text": "Native FastAPI Integration\nFastAPI as a library is incredibly popular, and we have received multiple requests to make it easier to integrate Panel with FastAPI. As of today that is a reality! Together with Philip Meier (@pmeier) from Quansight we created bokeh-fastapi, which allows the Bokeh server protocol to run inside a FastAPI application. By installing the additional package bokeh-fastapi, you can now run Panel apps natively on a FastAPI server, e.g. using uvicorn.\nTo get started simply pip install panel[fastapi] and very soon you’ll also be able to conda install -c conda-forge panel fastapi bokeh-fastapi.\nPanel provides two simple APIs for integrating your Panel applications with FastAPI.\nThe first is a simple decorator that adds a function defining a Panel app to the FastAPI application:\nimport panel as pn\n\nfrom fastapi import FastAPI\nfrom panel.io.fastapi import add_application\n\napp = FastAPI()\n\n@app.get(\"/\")\nasync def read_root():\n return {\"Hello\": \"World\"}\n\n@add_application('/panel', app=app, title='My Panel App')\ndef create_panel_app():\n slider = pn.widgets.IntSlider(name='Slider', start=0, end=10, value=3)\n return slider.rx() * '⭐'\nNow we can run the application with:\nfastapi dev main.py # or uvicorn main:app\nAfter visiting http://localhost:8000/docs you should see the following output:\n\nOf course you can also add multiple applications at once, whether that is a Panel app declared in a script, a Panel object or a function as above:\nfrom panel.io.fastapi import add_applications\n\napp = FastAPI()\n\n...\n\nadd_applications({\n \"/panel_app1\": create_panel_app,\n \"/panel_app2\": pn.Column('I am a Panel object!'),\n \"/panel_app3\": \"my_panel_app.py\"\n}, app=app)\nRead more in the FastAPI how-to guide."
},
{
"objectID": "posts/panel_release_1.5/index.html#new-components",
"href": "posts/panel_release_1.5/index.html#new-components",
"title": "Panel 1.5.0 Release",
"section": "New Components",
    "text": "New Components\nAs always, it wouldn’t be a new Panel release without at least a few new components being added to the core library.\n\nFileDropper\nThe new FileDropper widget is an advanced version of the FileInput widget with a host of exciting features:\n\nPreview of images and PDF files\nUpload progress bars\nChunked upload support, enabling the upload of files of any size\nImproved support for multiple files and directories\n\nGive it a try: drop some images here and watch the preview in action!\n\npn.widgets.FileDropper(height=400, multiple=True)\n\n\n\n\n\n \n\n\n\n\n\n\nTimePicker\nThe TimePicker complements the various other time- and date-based input widgets, providing a dedicated way to specify a time.\n\npn.Column(pn.widgets.TimePicker(value='13:27'), height=100)\n\n\n\n\n\n \n\n\n\n\n\n\nChatStep\nThe ChatStep provides a convenient way to progressively update a task being performed and collapse the output when done.\n\nstep = pn.chat.ChatStep(success_title='Task completed', width=300)\n\nwith step:\n step.stream('Working.')\n step.stream('Still working.')\n step.stream('Done.', replace=True)\n\nstep\n\n\n\n\n\n \n\n\n\n\n\n\nPlayer and DiscretePlayer\nWhile not entirely new, the Player and DiscretePlayer widgets have gotten a lot of love in this release, including the ability to give them a label:\n\npn.widgets.DiscretePlayer(options=[0, 1, 10, 100, 1000, 10000], height=100, name='Log scale')\n\n\n\n\n\n \n\n\n\n\nand also to reduce the size of the buttons and create a more minimal UI:\n\npn.widgets.Player(start=0, end=10, show_loop_controls=False, show_value=True, visible_buttons=['pause', 'play'], width=150)"
},
{
"objectID": "posts/panel_release_1.5/index.html#chat-interface-ux-improvements",
"href": "posts/panel_release_1.5/index.html#chat-interface-ux-improvements",
"title": "Panel 1.5.0 Release",
"section": "Chat Interface UX improvements",
"text": "Chat Interface UX improvements\nAnother major area of focus for us this release was improving the UX of the chat components. Specifically we wanted to ensure that the experience of long chat feeds would be smooth and streaming long chunks of text would be efficient. To that end we implemented automatic diffing for chat messages, ensuring we only send the latest chunk of text (rather than sending all the text each time a new chunk was streamed in).\n\n(Please excuse the fact that the model output is junk 🙂)\nPlay around with our web-based LLM application here (implemented as a JSComponent)."
},
{
"objectID": "posts/panel_release_1.5/index.html#py.cafe-support",
"href": "posts/panel_release_1.5/index.html#py.cafe-support",
"title": "Panel 1.5.0 Release",
"section": "PY.CAFE Support",
"text": "PY.CAFE Support\nWe are very excited to announce that, as of today, Panel is officially supported on py.cafe. PY.CAFE is a platform that allows you to create, run, edit, and share Python applications directly in your browser. You can find our profile with a gallery here.\n\nA big thank you to the entire py.cafe team, and especially to Maarten Breddels, who proved that this could be done in just one afternoon."
},
{
"objectID": "posts/panel_release_1.5/index.html#improved-contributor-experience",
"href": "posts/panel_release_1.5/index.html#improved-contributor-experience",
"title": "Panel 1.5.0 Release",
"section": "Improved Contributor Experience",
"text": "Improved Contributor Experience\nLastly, this release significantly enhances the developer experience for Panel contributors. For a long time, building Panel, running tests, and generating documentation were challenging tasks. In this release, we have completely re-architected the developer workflow, leveraging the power of pixi.\nOn behalf of all current and future Panel contributors, we would like to extend a BIG THANK YOU 🙏 to Simon Høxbro Hansen for his incredible efforts in making this happen.\nFor more details, check out the updated developer guide."
},
{
"objectID": "posts/panel_release_1.5/index.html#changelog",
"href": "posts/panel_release_1.5/index.html#changelog",
"title": "Panel 1.5.0 Release",
"section": "Changelog",
"text": "Changelog\n\nFeatures\n\nAllow building custom ESM based JSComponent and ReactComponent (#5593)\nAdd Placeholder pane (#6790)\nAdd FileDropper widget (#6826)\nAdd ChatStep component to show/hide intermediate steps (#6617)\nAdd TimePicker widget (#7013)\nAdd PyComponent baseclass (#7051)\nAdd native support for running Panel on FastAPI server (#7205)\n\n\n\nEnhancements\n\nAllow callbacks after append and stream (#6805)\nEnable directory uploads with FileInput (#6808)\nMake autoreload robust to syntax errors and empty apps (#7028)\nAdd support for automatically determining optimal Tabulator.page_size (#6978)\nVarious typing improvements (#7081, #7092, #7094, #7132)\nDisplay value for player (#7060)\nOptimize rendering and scrolling behavior of Feed (#7101)\nImplement support for multi-index columns in Tabulator (#7108)\nAdd placeholder while loading to ChatFeed (#7042)\nAllow streaming chunks to HTML and Markdown panes (#7125)\nShow Player interval value on click (#7064)\nExpose Player options to scale and hide buttons (#7065)\nAdd on_keyup and value_input for CodeEditor (#6919)\nDetect WebGL support on BrowserInfo (#6931)\nTweak ChatMessage layout (#7209, #7266)\nAdd nested editor to Tabulator (#7251)\nSupport anchor links in HTML and Markdown panes (#7258, #7263)\n\n\n\nBug fixes\n\nEnsure Gauge is responsively sized (#7034)\nEnsure Tabulator sorters are correctly synced (#7036)\nEnsure Tabulator selection is consistent across paginated, sorted and filtered states (#7058)\nDo not propagate clicks on input elements in Card header (#7057)\nEnsure Tabulator range selection applies to current view (#7063)\nEnsure Tabulator.selection is updated when indexes change (#7066)\nEnsure Tabulator can be updated with None value (#7067)\nFix issues with PYTHONPATH in Jupyter Preview (#7059)\nEnsure Tabulator styling is correctly applied on multi-index (#7075)\nFix various scrolling related Tabulator issues (#7076)\nEnsure Tabulator data is updated after filters are 
changed (#7074)\nAllow controlling DataFrame pane header and cell alignment (#7082)\nHighlight active page in Tabulator using Fast Design (#7085)\nEnsure follow behavior works when streaming to paginated Tabulator (#7084)\nAvoid events boomeranging from frontend (#7093)\nCorrectly map Tabulator expanded indexes when paginated, filtered and sorted (#7103)\nEnsure custom HoloViews backends do not error out (#7114)\nEnsure events are always dispatched sequentially (#7128)\nEnsure 'multiselect' Tabulator.header_filter uses ‘in’ filter function (#7111)\nEnsure no content warning is not displayed when template is added (#7164)\nMake it easy to prompt user for input in ChatFeed (#7148)\nFix LaTeX pane MathJax rendering (#7188)\nEnsure OAuth expiry is numeric and can be compared (#7191)\nCorrectly detect max depth of NestedSelect if level is empty (#7194)\nMake --setup/--autoreload/--warm work with --num-procs (#6913)\nEnsure error rendering application does not crash server (#7223)\nRefactor state.notifications to fix pyodide (#7235)\nHandle setting None value on DateRangePicker (#7240)\nAdd header_tooltips parameter to Tabulator (#7241)\nFix issue using Tabulator.header_filter with recent Pandas versions (#7242)\nFix setting of Dial background (#7261)\nFix issues when loading multiple times in a Jupyter(Lab) session (#7269)\n\n\n\nCompatibility and Updates\n\nUpdate to Bokeh 3.5.x\nUpdate Tabulator to 6.2.1 (#6840)\nUpdate to latest Pyscript (2024.08.01) and Pyodide (0.26.2) (#7016)\nAdd compatibility for latest Textual (#7130)\n\n\n\nDocumentation\n\nUpdate Tabulator.ipynb to show correct version number of Tabulator (#7053)\nUpdate jupyterlite version (#7129)\nDescribe usage of pyscript editor (#7017)\nAdd pycafe deployment guide (#7183)\nAdd WebLLM example to gallery (#7265)\n\n\n\nDeprecation and API Warnings\n\nPasswordInput and TextAreaInput no longer inherit directly from TextInput (#6593)\nRemove deprecated panel.depends.param_value_if_widget function (#7202)"
},
{
"objectID": "posts/panel_release_1.3/index.html",
"href": "posts/panel_release_1.3/index.html",
"title": "Panel 1.3.0 Release",
"section": "",
    "text": "What is Panel?\nPanel is an open-source Python library that lets you easily build powerful tools, dashboards and complex applications entirely in Python. It has a batteries-included philosophy, putting the PyData ecosystem, powerful data tables and much more at your fingertips. High-level reactive APIs and lower-level callback-based APIs ensure you can quickly build exploratory applications, but you aren’t limited if you build complex, multi-page apps with rich interactivity. Panel is a member of the HoloViz ecosystem, your gateway into a connected ecosystem of data exploration tools.\nNew release!\nWe are very pleased to announce the 1.3.0 release of Panel! This release packs many exciting new features, specifically:\nSpecial thanks to our first-time contributors @aktech, @meson800 and @monodera and returning contributors @cdeil, @pierrotsmnrd and @TheoMartin. We also want to highlight the contribution of our new core contributor @ahuang11 for developing the chat components and recognize @MarcSkovMadsen and @philippjfr for their efforts on testing and improving these new components. Finally we thank the entire core team including @sophiamyang, @Hoxbro, @MarcSkovMadsen, @maximlt, @ahuang11 and @philippjfr for their continued efforts.\nIf you are using Anaconda, you can get the latest Panel with conda install panel, and using pip you can install it with pip install panel."
},
{
"objectID": "posts/panel_release_1.3/index.html#chat-components",
"href": "posts/panel_release_1.3/index.html#chat-components",
"title": "Panel 1.3.0 Release",
"section": "Chat Components",
    "text": "Chat Components\nWith the huge popularity of LLMs, it was long overdue for Panel to add components that make it easy to interact with them. See the following trailer as a quick introduction.\nYou can also find a variety of examples demonstrating the capabilities of these new features on the Panel Chat Examples page, including examples using LangChain, OpenAI, Mistral, Llama, and RAG.\nIn Panel we want to focus on building general components that let you achieve what you need but also provide the flexibility to compose components as needed. Therefore the panel.chat subpackage consists of a number of components which are composable, including:\n\nChatMessage\nChatFeed\nChatInterface\n\nThese components build on each other as shown in the diagram below:\n\nAt the core is a ChatMessage which can encapsulate any other output and associates this with a user, a timestamp and reaction icons:\n\nmsg = pn.chat.ChatMessage('When did Panel add support for Chat components?', user='User')\n\nmsg\n\n\n\n\n\n \n\n\n\n\nThe ChatFeed adds support for composing multiple ChatMessages and a simple API for sending new messages:\n\nfeed = pn.chat.ChatFeed(msg)\n\nfeed.send('Chat components were added in v1.3.0!', user='Developer', avatar='👩')\n\nfeed\n\n\n\n\n\n \n\n\n\n\nLastly, the ChatInterface extends the ChatFeed by adding a UI for interacting with the ChatFeed:\n\npn.chat.ChatInterface(*feed, width=600)\n\n\n\n\n\n \n\n\n\n\nFinally we have added basic support for integrating with LangChain via the PanelCallbackHandler, which we hope will eventually be merged into LangChain itself (if you’re a LangChain dev, call us 😊)."
},
{
"objectID": "posts/panel_release_1.3/index.html#reactive-expressions-references",
"href": "posts/panel_release_1.3/index.html#reactive-expressions-references",
"title": "Panel 1.3.0 Release",
"section": "Reactive Expressions & References",
    "text": "Reactive Expressions & References\nPanel 1.3.0 now requires Param 2.0, which we are releasing simultaneously. Not only does the new Param release clean up the namespace of all Panel objects, but it also adds support for two major new capabilities:\n\nAllowing parameters, widgets, expressions and bound functions to be passed as references to Panel components\nIntegrating support for reactive expressions using the param.rx API\n\nTo unpack this a little, let’s play around with rx:\nslider = pn.widgets.IntSlider(start=0, end=7, value=3)\n\nslider.rx() ** 2\n\n\n\n \n\n\n\n\nThis very simple example demonstrates the core idea behind reactive expressions. It allows you to treat a dynamic reference, e.g. a widget value, as if it were the actual object, in this case an int. This allows you to build reactive pipelines using natural syntax. To discover more about reactive expressions, see the Param documentation.\nOf course this extends well beyond this simple example; combined with the ability of Panel components to resolve references, it makes it possible to write complex interactive components using natural syntax:\ndataset = pn.widgets.Select(name='Pick a dataset', options={\n 'penguins': 'https://datasets.holoviz.org/penguins/v1/penguins.csv',\n 'stocks': 'https://datasets.holoviz.org/stocks/v1/stocks.csv'\n})\nnrows = pn.widgets.IntSlider(value=5, start=0, end=20, name='N rows')\n\n# Load the currently selected dataset and sample nrows from it\ndf = pn.bind(pd.read_csv, dataset).rx().sample(n=nrows)\n\n# Bind the current value of the `df` expression to the Tabulator widget\ntable = pn.widgets.Tabulator(df, page_size=5, pagination='remote')\n\npn.Row(pn.Column(dataset, nrows), table)\n\nUsing pn.bind we can dynamically load various datasets and then apply transformations by turning the result into a reactive expression, e.g. to sample a variable number of rows from the dataset. 
Lastly we can pass the resulting reactive expression to Tabulator which will automatically reflect the result of the expression.\nNot only can Panel now resolve such expressions but it can even resolve references nested inside another object:\nfont_size = pn.widgets.FloatSlider(start=6, end=24, value=12, name='Font Size')\ncolor = pn.widgets.ColorPicker(name='Color')\n\npn.Row(\n pn.Column(font_size, color),\n pn.pane.HTML('Hello World!', styles={'color': color, 'font-size': pn.rx('{}pt').format(font_size)})\n)"
},
{
"objectID": "posts/panel_release_1.3/index.html#enhancements-components",
"href": "posts/panel_release_1.3/index.html#enhancements-components",
"title": "Panel 1.3.0 Release",
"section": "Enhancements & Components",
    "text": "Enhancements & Components\n\nOAuth improvements\nPanel has shipped with OAuth integration for a very long time. In this release we finally spent some time rationalizing the code and adding support for:\n\nAuthorization code and password-based OAuth grant workflows for when you don’t want to issue a client secret for your Panel application\nThe ability to automatically refresh access_tokens whenever they expire using the --oauth-refresh-tokens flag (discover more here)\n\n\n\nAuthorization callbacks\nIf you are using OAuth or basic authentication with Panel you can now provide an authorization_callback that not only lets you allow or deny a user access to a particular app but also lets you redirect them elsewhere. Discover more here.\n\n\nColormap Widget\nThe new ColorMap widget makes it easy to let users pick between multiple color palettes.\nfrom matplotlib.cm import Reds, Greens, Blues, viridis\n\ncmaps = {'Reds': Reds, 'Greens': Greens, 'Blues': Blues, 'viridis': viridis}\n\npn.widgets.ColorMap(options=cmaps, ncols=2)"
},
{
"objectID": "posts/panel_release_1.3/index.html#changelog",
"href": "posts/panel_release_1.3/index.html#changelog",
"title": "Panel 1.3.0 Release",
"section": "Changelog",
"text": "Changelog\n\nFeature\n\nIntegrate support for param reactive expressions and expose pn.rx (#5138, #5582)\nImplement ChatMessage, ChatFeed and ChatInterface components (#5333)\nUnify OAuth implementations and refresh access_token (#5627)\nAdd ColorMap widget (#5647)\n\n\n\nEnhancement\n\nAdd unit to widget in HoloViews pane if provided (#5535)\nAllow registering global on_session_destroyed callback (#5585)\nImplement auto_grow on TextAreaInput (#5592)\nAdd ability to redirect users from authorization callback (#5594)\nAdd support for Path object in FileDownload (#5607)\nAdd authorization_code and password based OAuth login handlers (#5547)\nAdd format to EditableFloatSlider and EditableIntSlider (#5631)\nAdd support for decorating async functions with pn.io.cache (#5649)\nMap param.Bytes to FileInput widget (#5665)\n\n\n\nBug fixes\n\nFixes for Column invisible scroll_button taking space (#5532)\nGuard undefined values from being set on BrowserInfo (#5588)\nFix thumbnails and use Panel design on index page (#5595)\nFix regressions in TextEditor caused by migration to shadow DOM (#5609)\nSync location state from request (#5581)\nFix Select widget label offset in Material Design (#5639)\nOverride token contents when reusing sessions (#5640)\nFix patching a table with a DataFrame with a custom index (#5645)\nSet FloatPanel status correctly on initialization (#5651)\nFix patching table with pd.Timestamp values (#5650)\nEnsure notifications and browser_info are loaded when HoloViews is loaded\nGracefully handle resolution of invalid paths in _stylesheets (#5666)\nHandle patching tables with NaT values (#5675)\n\n\n\nCompatibility\n\nUpgrade to Param 2.0\nCompatibility with Bokeh 3.3.0\n\n\n\nDocumentation\n\nImproved docs on deploying with GCP (#5531)\nAdd Streamlit migration guide for chat components (#5670)"
},
{
"objectID": "posts/panel_release_0.7/index.html",
"href": "posts/panel_release_0.7/index.html",
"title": "Panel 0.7.0 Release",
"section": "",
"text": "We are very pleased to announce the 0.7 release of Panel, which brings a ton of new features, enhancements, and many important bug fixes. Many thanks to the 20 contributors to this release (listed at the bottom). This release introduced only minimal changes in existing APIs, as Panel progresses towards a more stable phase of development. One of the major goals in this release was better compatibility with the Jupyter ecosystem, which culminated in the ipywidgets support. The next major release will be the 1.0 release, which will involve some minor API cleanup and a number of long anticipated features, including a number of polished inbuilt templates and the ability to serve existing Jupyter widgets as part of a Panel app."
},
{
"objectID": "posts/panel_release_0.7/index.html#ipywidget-support",
"href": "posts/panel_release_0.7/index.html#ipywidget-support",
"title": "Panel 0.7.0 Release",
"section": "ipywidget support",
"text": "ipywidget support\nPanel is built on top of Bokeh, which ships with its own standalone server and has also provided some degree of integration in Jupyter. Panel itself has relied on some custom extensions for Jupyter support which don’t necessarily work in some non-standard notebook and Jupyter environments such as the recently released Voilà dashboard server. After working with the Jupyter and Bokeh developers we have now released the jupyter_bokeh library and extension which allows displaying Bokeh and Panel models as ipywidgets and therefore ensures that bi-directional communication works in any environment that supports the Jupyter widget protocol.\nIn Panel we can enable this globally using pn.extension(comm='ipywidgets') or by explicitly converting a panel object to an ipywidget using pn.ipywidget(obj).\n\nimport ipywidgets as ipw\n\naccordion = ipw.Accordion(children=[\n pn.ipywidget(pn.Column(\n pn.widgets.FloatSlider(),\n pn.widgets.TextInput()\n )),\n pn.ipywidget(hv.Curve([1, 2, 3])),\n pn.ipywidget(hv.Area([1, 2, 3]).opts(responsive=True, min_height=300))\n])\n\naccordion.set_title(0, 'Widgets')\naccordion.set_title(1, 'Curve')\naccordion.set_title(2, 'Area')"
},
{
"objectID": "posts/panel_release_0.7/index.html#support-for-.jscallback-and-improved-.jslink",
"href": "posts/panel_release_0.7/index.html#support-for-.jscallback-and-improved-.jslink",
"title": "Panel 0.7.0 Release",
"section": "Support for .jscallback and improved .jslink",
"text": "Support for .jscallback and improved .jslink\nPanel has long had support for linking the parameters of two objects in Javascript using the .jslink method. In this release .jslink can now be invoked bi-directionally:\n\nkwargs = dict(start=0, end=1, step=0.1, align='center')\nslider = pn.widgets.FloatSlider(name='Slider', **kwargs)\nspinner = pn.widgets.Spinner(name='Spinner', **kwargs)\n\nslider.jslink(spinner, value='value', bidirectional=True)\n\npn.Row(slider, spinner)\n\n\n\n\n\n\n\n \n\n\n\n\nThere is also now a .jscallback method, for generating arbitrary JavaScript callbacks in response to some change to a property:\n\nvalue1 = pn.widgets.Spinner(value=0, width=75)\noperator = pn.widgets.Select(value='*', options=['*', '+'], width=50, align='center')\nvalue2 = pn.widgets.Spinner(value=0, width=75)\nbutton = pn.widgets.Button(name='=', width=50)\nresult = pn.widgets.StaticText(value='0', width=50, align='center')\n\nbutton.jscallback(clicks=\"\"\"\nif (op.value == '*') \n result.text = (v1.value * v2.value).toString()\nelse\n result.text = (v1.value + v2.value).toString()\n\"\"\", args={'op': operator, 'result': result, 'v1': value1, 'v2': value2})\n\npn.Row(value1, operator, value2, button, result)"
},
{
"objectID": "posts/panel_release_0.7/index.html#improved-pipelines",
"href": "posts/panel_release_0.7/index.html#improved-pipelines",
"title": "Panel 0.7.0 Release",
"section": "Improved Pipelines",
"text": "Improved Pipelines\nPreviously the Pipeline class allowed setting up linear pipelines to implement a multi-stage workflow. The Pipeline class was completely overhauled in this release to make it easy to lay out the individual components yourself and most importantly to set up an arbitrary graph of pipeline stages. Pipelines now allow diverging and converging branches for more flexible workflows than before. Below is the definition and the overview of a complex graph-based pipeline with diverging and converging stages:\ndag = pn.pipeline.Pipeline()\n\ndag.add_stage('Input', Input)\ndag.add_stage('Multiply', Multiply)\ndag.add_stage('Add', Add)\ndag.add_stage('Result', Result)\ndag.add_stage('Export', Export)\n\ndag.define_graph({'Input': ('Multiply', 'Add'), 'Multiply': 'Result', 'Add': 'Result', 'Result': 'Export'})"
},
{
"objectID": "posts/panel_release_0.7/index.html#improved-templates",
"href": "posts/panel_release_0.7/index.html#improved-templates",
"title": "Panel 0.7.0 Release",
"section": "Improved Templates",
"text": "Improved Templates\nSince Panel 0.6 it has been possible to declare custom Templates to take full control over the layout and visual styling of the application or dashboard. In this release we now support rendering custom templates in a notebook and even declaring separate templates for notebook and server usage. In the next release we will focus on providing a number of custom templates built on common JS/CSS frameworks such as Materialize UI, GridStack, and reveal.js."
},
{
"objectID": "posts/panel_release_0.7/index.html#new-components-1",
"href": "posts/panel_release_0.7/index.html#new-components-1",
"title": "Panel 0.7.0 Release",
"section": "New Components",
"text": "New Components\nThis release includes a variety of new components contributing to the growing set of widgets, panes, and layouts showcased in the reference gallery.\n\nProgress bars\nThe Progress widget displays the progress towards some target based on the current value and the max value. If no value is set the Progress widget is in indeterminate mode and will either be static or animated depending on the active parameter. If you are able to measure or estimate how much progress is remaining on an operation, you can use this widget to give feedback to the user.\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nDataFrame widget\nThe DataFrame widget allows editing an existing pandas DataFrame using a custom DataTable. Here, each of the numbers and strings are user-editable, which will be reflected in the contents of the DataFrame in Python when there is a live server available.\n\nimport pandas as pd\n\ndf = pd.DataFrame({'int': [1, 2, 3], 'float': [3.14, 6.28, 9.42], 'str': ['A', 'B', 'C']}, index=[1, 2, 3])\n\npn.widgets.DataFrame(df, widths={'index': 10, 'int': 10, 'float': 50, 'str': 100}, width=200)\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nPasswordInput & TextAreaInput widgets\nNew PasswordInput and TextAreaInput make it possible to enter hidden text and provide multi-line text inputs to Panel:\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nDataFrame Pane\nThe DataFrame pane renders Pandas, Dask and Streamz dataframes while exposing a range of options to control the formatting.\n\n\n\nStreamz Pane\nThe Streamz pane accepts any streamz Stream to allow streaming arbitrary objects. The basic example in the documentation demonstrates how to quickly put together a streaming vega plot:\n\n\n\n\n\nVideo Pane\nThe Video pane uses a standard HTML5 media player widget to display any mp4, webm, or ogg video file. 
Like the corresponding Audio pane, the current timestamp, volume, and play state can be controlled from Python and Javascript:\n\nvideo = pn.pane.Video('https://sample-videos.com/video123/mp4/720/big_buck_bunny_720p_1mb.mp4',\n width=640, height=480)\n\nvideo\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nVTKVolume\nThe VTKVolume pane uses the vtk.js library to render interactive, volumetric 3D plots with control over opacity and the color curve.\n\n\n\n\n\nGridBox layout\nThe new GridBox layout complements the existing Row, Column, Tabs, and GridSpec layouts in that it allows wrapping the list of items provided to it by the desired number of rows or columns:\n\nrcolor = lambda: \"#%06x\" % random.randint(0, 0xFFFFFF)\n\nbox = pn.GridBox(*[pn.pane.HTML(background=rcolor(), width=50, height=50) for i in range(22)], ncols=4)\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\nDivider\nThe new Divider component also nicely complements the existing Spacer components, making it easy to draw a visual divider between vertically stacked components.\n\npn.Column(\n pn.layout.Divider(),\n pn.Row(pn.layout.HSpacer(), '# Title', pn.layout.HSpacer()),\n pn.layout.Divider()\n)"
},
{
"objectID": "posts/panel_release_0.7/index.html#contributors",
"href": "posts/panel_release_0.7/index.html#contributors",
"title": "Panel 0.7.0 Release",
"section": "Contributors",
"text": "Contributors\nMany thanks to the many contributors to this release:\n\nPhilipp Rudiger (@philippjfr): Maintainer & lead developer\nXavier Artusi (@xavArtley): VTK support\nJames A. Bednar (@jbednar): Documentation\nAndrew Tolmie (@DancingQuanta): FileInput widget\nArne Recknagel (@a-recknagel): Python 3.8 support, build improvements\nJulius Winkelmann (@julwin): TextAreaInput, PasswordInput\nPav A (@rs2): Example notebooks\nEd Jung (@xtaje): Default values fix\nKarthick Perumal (@Karamya): Audio widget enhancements\nChristopher Ball (@ceball): Build and doc improvements\nAndrew Huang (@ahuang11): Disabling widget boxes\nEduardo Gonzalez (@eddienko): Fixing Django docs\nJacob Barhak (@Jacob-Barhak): Updated Markdown docs\nJean-Luc Stevens (@jstevens): Cross-selector fixes\nJulia Signell (@jsignell): Documentation fixes\nLandung “Don” Setiawan (@lsetiawan): StoppableThread improvements\nMateusz Paprocki (@mattpap): Build infrastructure\nMaxime Borry (@maxibor): Widget fixes\nStefan Farmbauer (@RedBeardCode): File-like object support on images\n@kleavor: Fixed GridSpec override behavior"
},
{
"objectID": "posts/panel_release_0.13/index.html",
"href": "posts/panel_release_0.13/index.html",
"title": "Panel 0.13.0 Release",
"section": "",
"text": "What is Panel?\nPanel is an open-source library that lets you create custom interactive web apps and dashboards by connecting widgets to plots, images, tables, and text - all while writing only Python!\nPanel integrates seamlessly with your existing work:\nPlease check out the Panel website to find out more.\nNew release!\nWe are very pleased to announce the 0.13 release of Panel! This release focuses on adding a number of powerful features requested by our users, including:\nHowever, as Panel is moving towards a 1.0 release the large number of bug fixes is of almost equal importance. For a full overview of the changes in this release view the release notes.\nMany, many thanks to everyone who filed issues or contributed to this release. In particular we would like to thank @nghenzi, @Stubatiger, @hyamanieu, @samuelyeewl, @ARTUSI, @pmav99, @Prashant0kgp, @L8Y, @ingebert, @rahulporuri, @lemieux, @blelem, @raybellwaves, @sdc50, @sophiamyang, @gnowland, @govinda18, @maartenbreddels, @andriyor, @j-r77, @robmarkcole, @douglas-raillard-arm, @Kadek, @joelostblom for contributing various fixes and improvements. Special thanks to the growing list of core contributors and maintainers including @jbednar, @xavArtley, @Hoxbro, @maximlt, @MarcSkovMadsen and @philippjfr for continuing to push the development of Panel.\nIf you are using Anaconda, you can get the latest Panel with conda install -c pyviz panel, and using pip you can install it with pip install panel."
},
{
"objectID": "posts/panel_release_0.13/index.html#roadmap",
"href": "posts/panel_release_0.13/index.html#roadmap",
"title": "Panel 0.13.0 Release",
"section": "Roadmap",
"text": "Roadmap\nThis release has included a ton of great features and likely marks the last minor release before the Panel 1.0 release. Note that 1.0 will introduce major changes and we will be looking to you to provide feedback and help test the release. So look out for announcements of alpha, beta and release candidate releases and help make sure Panel 1.0 will be the success we hope it will be.\n\nDocumentation & Website\nThe Panel documentation has slowly evolved over time with new content and material added whenever new features were added. This means that we never did a full review of the documentation and considered how best to introduce users to the fundamental concepts. Before the 1.0 release we are planning to do a complete overhaul of the documentation and modernize the website.\n\n\nExporting to WASM\nAs highlighted above we now have support for running Panel applications entirely in the browser via Jupyterlite and Pyodide. In the future we hope to extend this support to directly export your existing Panel applications to a standalone HTML file that will run your Python application entirely clientside in your browser.\n\n\nNative applications\nThanks to recent collaboration with the brilliant folks at Quansight and the Beeware project we have a basic prototype for running Panel apps in a native application. We hope to integrate this work into Panel to eventually allow you to build installers for the major operating systems (Linux, OSX and Windows) and hopefully also mobile platforms including iOS and Android.\n\n\nRewrite of the layout engine\nPanel is built on top of Bokeh which was originally a plotting library but included an extremely powerful server architecture that has allowed us to build this entire ecosystem on top of. One of the legacies of Bokeh being primarily a plotting library was that it included a layout engine to ensure plots could be easily aligned. 
Unfortunately this also had severe downsides, specifically since this so called “managed layout” had to perform expensive computations to measure the size of different components on the page. This is why building complex nested layouts using rows, columns and grids could sometimes slow down your application.\nBokeh has now begun replacing this managed layout with a CSS based unmanaged layout, which will free us from the performance bottlenecks of the past. This will result in a bright new future for Panel but it may also be a little disruptive in the short term. As soon as development versions of Bokeh 3.0 and Panel 1.0 are available we would therefore appreciate it if you could provide us with feedback about any regressions related to layouts in your own applications so we can smooth the upgrade path.\n\n\nCSS & Styling\nAnother major change resulting from the upgrade to Bokeh 3.0 will be in the way styling is managed. In the past you had the ability to modify styling of Panel/Bokeh components by constructing somewhat brittle CSS rules. This will now be a thing of the past as we will expose the stylesheets for all components directly in Python. This will afford much greater and simplified control over the styling of components but will also disrupt anyone who relied on applying CSS stylesheets directly. We again hope to minimize the disruptions related to this change and will provide a detailed migration guide.\n\n\nHelp us!\nPanel is an open-source project and we are always looking for new contributors. Join the discussion on the Discourse and we would be very excited to get you started contributing! Also please get in touch with us if you work at an organization that would like to support future Panel development, fund new Panel features, or set up a support contract.\n\n\nSponsors\nMany thanks to our existing sponsors:"
},
{
"objectID": "posts/panel_release_0.11/index.html",
"href": "posts/panel_release_0.11/index.html",
"title": "Panel 0.11.0 Release",
"section": "",
"text": "What is Panel?\nPanel is an open-source library that lets you create custom interactive web apps and dashboards by connecting widgets to plots, images, tables, and text - all while writing only Python!\nPanel integrates seamlessly with your existing work:\nPlease check out the Panel website to find out more.\nNew release!\nWe are very pleased to announce the 0.11 release of Panel! This release focuses on adding a number of powerful features requested by our users, including:\nCrucially this release also provides compatibility with Bokeh>=2.3. For a full overview of the changes in this release view the release notes.\nMany, many thanks to the people who contributed to this release, including @philippjfr (author, maintainer, release manager), @MarcSkovMadsen, @xavArtley, @hyamanieu, @cloud-rocket, @kcpevey, @kaseyrussell, @miliante, and @AjayThorve.\nIf you are using Anaconda, you can get the latest Panel with conda install -c pyviz panel, and using pip you can install it with pip install panel."
},
{
"objectID": "posts/panel_release_0.11/index.html#autoreload",
"href": "posts/panel_release_0.11/index.html#autoreload",
"title": "Panel 0.11.0 Release",
"section": "Autoreload",
"text": "Autoreload\nDeveloping applications is an iterative process but previously it could be quite cumbersome to do so effectively when editing the application in an editor. To improve this we have added a --autoreload flag to the panel serve CLI command. When autoreload is set, the source files used by the application are watched and the browser view is reloaded when a file is changed.\n\n\n\n\nThe --autoreload option even handles error conditions gracefully. If the application script cannot be executed the error is displayed in place of the application:"
},
{
"objectID": "posts/panel_release_0.11/index.html#loading-parameter",
"href": "posts/panel_release_0.11/index.html#loading-parameter",
"title": "Panel 0.11.0 Release",
"section": "Loading parameter",
"text": "Loading parameter\nTo provide users of an application or dashboard with a good user experience and a feeling of responsiveness, loading spinners and indicators are very important. Therefore this release has added a loading parameter to all Panel components, which overlays the component with a spinner. Panel provides a selection of spinner types to choose from, which can be controlled globally using the config object:\n\npn.config.loading_spinner: The style of the global loading indicator, e.g. ‘arcs’, ‘bars’, ‘dots’, ‘petals’.\npn.config.loading_color: The color of the global loading indicator as a hex color, e.g. #6a6a6a.\n\n\npn.Row(*(pn.pane.SVG(open(pn.io.resources.ASSETS_DIR / f'{spinner}_spinner.svg').read().format(color='green'), height=200, width=200)\n for spinner in pn.config.param.loading_spinner.objects))\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\nThe loading parameter can be controlled from Python to indicate a component is loading but can also be used directly from Javascript. Below you can see a demo of the default loading indicator and toggle it on and off using the jslinked checkbox:\n\nhtml = pn.pane.HTML(width=200, height=200, background='black', loading=True)\n\npn.Column(\n html.controls(['loading'], jslink=True)[1],\n html\n)"
},
{
"objectID": "posts/panel_release_0.11/index.html#templates",
"href": "posts/panel_release_0.11/index.html#templates",
"title": "Panel 0.11.0 Release",
"section": "Templates",
"text": "Templates\nIn the 0.10 release Panel introduced the concept of easily reusable templates and shipped a number of default templates. In this release the templates were further polished to achieve a more consistent look and feel when using the DarkTheme. Additionally we made it possible to add custom CSS and JS files directly on a template using the Template.config object, making it possible to add different resources to different routes in an application.\nFinally we added two new Fast UI based templates to join the lineup of templates provided by Panel.\n\nFastListTemplate: Builds on the Fast UI framework, making it easy to build polished looking applications and dashboards.\n\n\n\n\n\nFastGridTemplate: Builds on the Fast UI framework and react grid layouts, making it easy to build responsive, resizable and draggable grid layouts."
},
{
"objectID": "posts/panel_release_0.11/index.html#components",
"href": "posts/panel_release_0.11/index.html#components",
"title": "Panel 0.11.0 Release",
"section": "Components",
"text": "Components\nThis release adds a number of new components to include in your applications and dashboards.\n\nTabulator widget\nPowerful data tables or grids are an essential part of many data-centric applications and this release includes the feature-rich Tabulator component. This new table or data-grid is built on the Tabulator.js library, which is highly extensible, performant and feature rich.\n\ndf = pd.DataFrame(np.random.randn(1000, 4), columns=list('ABCD'))\n\ntabulator = pn.widgets.Tabulator(\n df, pagination='remote',\n frozen_columns=['index'],\n selectable='checkbox',\n page_size=10\n)\n\n# Pandas styling API\ntabulator.style.applymap(lambda v: 'color: green' if v > 0 else 'color: red')\n\ntabulator\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\nNote that the pagination requires a live server to dynamically fetch new data.\nSome highlights include:\n\nRich formatters and editors\nIntelligent column layout and resizing\nPandas .style API to achieve custom look and feel\nRemote pagination support to handle very large tables\nWide range of themes to choose from\nSupport for freezing and grouping columns and rows\nPowerful filtering API\nAbility to download table data directly in Javascript\nEfficient streaming and patching of data\n\nFor more detail, see the documentation in the Panel reference guide.\n\n\nFINOS Perspective\nPerspective is an interactive visualization component for large, real-time datasets. Originally developed for J.P. Morgan’s trading business, Perspective makes it simple to build real-time & user configurable analytics entirely in the browser. The Perspective pane makes it easy to embed this component in Panel:\n\nperspective = pn.pane.Perspective(\n df.cumsum(), plugin='d3_y_line', columns=['A', 'B', 'C', 'D'], theme='material-dark',\n sizing_mode='stretch_width', height=500\n)\n\nperspective\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\nSee more details in the Perspective reference gallery entry.\n\n\nIDOM support\nIDOM is a Python library for defining and controlling interactive webpages. It allows us to write interactive HTML components directly from Python and embed those in a Panel application, e.g. below we define a Slideshow component consisting of an img element with a callback which advances the image index on click.\n\n@idom.component\ndef Slideshow():\n index, set_index = idom.hooks.use_state(0)\n\n def next_image(event):\n set_index(index + 1)\n\n return idom.html.img(\n {\n \"src\": f\"https://picsum.photos/800/300?image={index}\",\n \"style\": {\"cursor\": \"pointer\"},\n \"onClick\": next_image,\n }\n )\n\npn.pane.IDOM(Slideshow, height=300);\n\n\n\n\nSee more details in the IDOM reference gallery entry.\n\n\nTrend indicator\nA common need for dashboards is communicating key performance indicators (KPIs) in a visually clean form. This release adds the Trend indicator to the existing lineup of indicators. The Trend indicator shows a number, a change indicator and a plot and responsively resizes to fill the available space. It also provides methods to stream new data to the view:\n\ntrend = pn.indicators.Trend(\n title=\"Panel Users\",\n plot_type='line',\n data={\"x\": [0, 1, 2, 3, 4, 5], \"y\": [300, 3800, 3700, 3800, 3900, 4000]},\n height=300,\n width=300\n)\n\ncontrols = trend.controls(jslink=True).clone(scroll=True, height=355)\n\npn.Row(controls, trend)\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\nThe ability to stream allows us to performantly update many views at once:\n\n\n\n\n\nTextToSpeech and SpeechToText\nThe TextToSpeech and SpeechToText widgets as their names suggest turn text into speech and speech into text using the browser APIs."
},
{
"objectID": "posts/panel_release_0.11/index.html#roadmap",
"href": "posts/panel_release_0.11/index.html#roadmap",
"title": "Panel 0.11.0 Release",
"section": "Roadmap",
"text": "Roadmap\nThis release has included a ton of great features but many of the roadmap items from the previous release are still open.\n\nCustom components\nWith the IDOM pane users can now build custom HTML components; however, in the future we also want to provide expert users with the power to develop their own HTML objects (including custom WebComponents) using a native Panel implementation.\n\n\nTemplated layouts\nComplementing the ability to define individual custom components, we want to allow users to declare custom layouts by writing small HTML template strings the components will be inserted into. This will make it possible to leverage custom CSS or JS frameworks, e.g. to build custom types of responsive grids that can be used just like the current Panel layouts (Row, Column, etc.).\n\n\nResponsive grids\nIn addition to allowing users to build custom layouts using their favorite CSS/JS frameworks, we also want to ship a well-supported responsive grid layout that reflows components on the page based on the size of the browser tab. Reflowing will make it much easier to provide a great experience on mobile devices.\n\n\nBetter debugging and profiling\nWe also want to make the process of designing, building, debugging, and optimizing apps easier. We plan to develop new tools to visualize Panel and Param callback and dependency graphs, to help developers understand how data and events propagate through their panels. To help them identify performance bottlenecks, these graphs will be annotated with timing information so that the slow steps can easily be identified.\n\n\nDocumentation overhaul\nAs we approach a Panel 1.0 release we want to overhaul the documentation so it becomes much easier to find the information you are looking for.\n\n\nHelp us!\nPanel is an open-source project and we are always looking for new contributors. Join the discussion on the Discourse and we would be very excited to get you started contributing! Also please get in touch with us if you work at an organization that would like to support future Panel development, fund new Panel features, or set up a support contract.\n\n\nSponsors\nMany thanks to our existing sponsors:"
},
{
"objectID": "posts/panel_announcement/index.html",
"href": "posts/panel_announcement/index.html",
"title": "Panel Announcement",
"section": "",
"text": "A high-level app and dashboarding solution for the PyData ecosystem.\nAuthor: Philipp Rudiger\nPanel is a new open-source Python library that lets you create custom interactive web apps and dashboards by connecting user-defined widgets to plots, images, tables, or text. It is the culmination of our multi-year effort to give data scientists tools for sharing the output of their analysis and models with internal or external consumers, without having to learn completely different technology stacks or getting into the weeds of web development. Panel can already be installed using either conda install -c pyviz panel or pip install panel, and like all other PyViz projects it is entirely open-source and BSD-3 licensed. To get started visit the website and find the Panel code on GitHub.\nThe main aim behind Panel was to make it as easy as possible to wrap the outputs of existing tools in the PyData ecosystem as a control panel, app, or dashboard, ensuring that users can seamlessly work with the analysis and visualization tools they are already familiar with. Secondly, Panel aims to make it trivial to go from prototyping a little app to deploying it internally within an organization or sharing it publicly with the entire internet."
},
{
"objectID": "posts/panel_announcement/index.html#architecture",
"href": "posts/panel_announcement/index.html#architecture",
"title": "Panel Announcement",
"section": "Architecture",
"text": "Architecture\nPanel is built on top of two main libraries:\n\nBokeh provides the model-view-controller framework on which Panel is built, along with many of the core components such as the widgets and layout engine\nParam provides a framework for reactive parameters which are used to define all Panel components.\n\nThe choice to build an API on top of Bokeh instead of simply extending it was driven by a number of core requirements. One of the most important was the ability to transition seamlessly between notebook and deployed server contexts, and doing so efficiently and in a scalable way. Another was the flexibility afforded by being able to dynamically generate a Bokeh representation for each view of a Panel object, encouraging reuse and composability of components. A third reason was to make it clear that Panel supports any viewable Python object, including plots from dozens of different libraries, not just Bokeh plots (Panel uses Bokeh internals and technology, but in no way assumes that you will use it with Bokeh plots).\nMost importantly, however, we wanted to design an API that provides a high degree of both flexibility and simplicity. Many of the most common operations for displaying, saving, and serving a dashboard are exposed directly on Panel objects and uniformly across them, making it simpler to work with them. Additionally, updating and even dynamically adding/removing/replacing the individual components of a dashboard are as easy as manipulating a list or dictionary in Python. Of course, Panel should not be seen as being in competition with Bokeh; it simply provides higher-level abstractions on top of Bokeh. If needed, Bokeh components can easily be used from within Panel, and Panel components can easily be converted into Bokeh models which can be embedded in a larger Bokeh application."
},
{
"objectID": "posts/panel_announcement/index.html#comparison-to-other-dashboarding-and-widget-libraries",
"href": "posts/panel_announcement/index.html#comparison-to-other-dashboarding-and-widget-libraries",
"title": "Panel Announcement",
"section": "Comparison to other dashboarding and widget libraries",
"text": "Comparison to other dashboarding and widget libraries\nPanel is a new library in this space but it is heavily inspired by existing concepts and technologies that have in many cases been around for decades. The three main inspirations for Panel were R’s Shiny library, Jupyter’s ipywidgets library, and Plotly’s Dash, and we owe all three libraries/ecosystems much gratitude for pioneering this space.\n\nShiny\nFor anyone who performs analysis in the R programming language, Shiny provides an incredibly powerful and well polished framework for building web applications. It sets a very high bar, from which Panel has taken large amounts of inspiration. In particular, the reactive patterns in Panel are closely modeled on Shiny, and Panel hopes to provide a similarly easy entrypoint for developing web applications in the Python ecosystem. Despite the similarities, Panel is not merely a Shiny clone for Python. In addition to the different constraints imposed by a different language, Panel takes a much more explicit approach toward the UI layout, which in Shiny is usually kept in a separate file from the business logic.\n\n\nJupyter/ipywidgets\nThe Jupyter ecosystem has led to an explosion in the ability to share and disseminate the results of analysis and been a major driver in pushing Python as the most important programming language in scientific analysis, data science, and machine learning. Within the Jupyter ecosystem, the ipywidgets library has provided the foundation for building interactive components and embedding them in a notebook. The community that has developed around this ecosystem has been a major inspiration and many core ideas in Panel are built on concepts popularized by these libraries, including the ability of objects to display themselves with rich representations, easily defining links between components in JS code, and Panel’s interact API. The main difference between Panel and ipywidgets is that the Panel architecture is not closely coupled to the IPython kernel that runs interactive computations in Jupyter. Although Panel fully supports operation in Jupyter notebooks, it is based on a generalized Python/JS communication method that is also fully supported on standalone non-Jupyter servers, making Panel apps work equally well inside and outside of Jupyter contexts.\n\n\nDash\nLike Panel, Plotly’s 2017 Dash library allows building very complex and highly polished applications straight from Python. Dash is also built on a reactive programming model that (along with Shiny) was a big inspiration for some of the features in Panel. Panel and Dash are quite different in other ways, though. Dash is (by design) focused specifically on support for Plotly plots, while Panel is agnostic about what objects are being displayed, and is designed to support whatever visualization or analysis tools are most appropriate for your workflows. Dash also typically requires much more detailed knowledge of low-level web development, while Panel allows users to simply drop in their components, building a usable dashboard in just a few lines of Pythonic code."
},
{
"objectID": "posts/panel_announcement/index.html#open-source-license-community",
"href": "posts/panel_announcement/index.html#open-source-license-community",
"title": "Panel Announcement",
"section": "Open source license & Community",
"text": "Open source license & Community\nPanel is BSD licensed and therefore free to use and modify by anyone and everyone. We built Panel to make our consulting work easier and give the individuals in those organizations more power, but developing something among a small group of developers only goes so far. We believe everyone benefits when communities join their efforts to build tools together. So if you are interested in contributing to Panel or even just have suggestions for features, fixes, and improvements, join us on GitHub or Gitter.\nThanks for checking out Panel! We will be giving a talk and tutorial about it at SciPy 2019 in July and are actively working on building further materials, including more demos, tutorials, and examples in the coming weeks and months!"
},
{
"objectID": "posts/panel_announcement/index.html#further-resources",
"href": "posts/panel_announcement/index.html#further-resources",
"title": "Panel Announcement",
"section": "Further resources",
"text": "Further resources\n\nOur documentation is hosted at https://panel.pyviz.org\nThe main development repository for Panel is on GitHub\nJoin us on Twitter @PyViz_org\nFind a collection of demos and examples on GitHub\n\n\nTalks\n\nEasy Dashboards for Any Visualization in AE5, with Panel\nRapid Prototyping and Deployment Using the PyViz Stack and Anaconda Enterprise\nVisualizing & Analyzing Earth Science Data Using PyViz & PyData"
},
{
"objectID": "posts/lumen_ai_announcement/index.html#what-is-it",
"href": "posts/lumen_ai_announcement/index.html#what-is-it",
"title": "Lumen AI Announcement",
"section": "What is it?",
"text": "What is it?\nLumen is a fully open-source and extensible agent based framework for chatting with data and for retrieval augmented generation (RAG). The declarative nature of Lumen’s data model makes it possible for LLMs to easily generate entire data transformation pipelines, visualizations, and many other types of output. Once generated, the data pipelines and visual output can be easily serialized, making it possible to share them, to continue the analysis in a notebook, and/or build entire dashboards.\n\nGenerate SQL: Generate data pipelines on top of local or remote files, SQL databases or your data lake.\nProvide context and embeddings: Give Lumen access to your documents to give the LLM the context it needs.\nVisualize your data: Generate everything from charts to powerful data tables or entire dashboards using natural language.\nInspect, validate and edit results: All LLM outputs can easily be inspected for mistakes, refined, and manually edited if needed.\nSummarize results and key insights: Have the LLM summarize key results and extract important insights.\nCustom analyses, agents and tools: Extend Lumen with custom agents, tools, and analyses to generate deep insights tailored to your domain.\n\nLumen sets itself apart from other agent-based frameworks in that it focuses on being fully open and extensible. With powerful internal primitives for expressing complex data transformations, the LLM can gain insights into your datasets right out of the box, and Lumen can be further tailored with custom agents, analyses and tools to empower even non-programmers to perform complex analyses without having to code. The customization makes it possible to generate any type of output, allow the user and the LLM to perform analyses tailored to your domain, and look up additional information and context easily. Since Lumen is built on Panel it can render almost any type of output with little to no effort, ensuring that even the most esoteric use case is easily possible.\nThe declarative Lumen data model further sets it apart from other tools, making it easy for LLMs to populate custom components and for the user to share the results. Entire multi-step data transformation pipelines–be they in SQL or Python–can easily be captured and used to drive custom visualizations, interactive tables and more. Once generated, the declarative nature of the Lumen specification allows them to be shared, reproduced in a notebook, or composed into a dashboard through a drag-and-drop interface."
},
{
"objectID": "posts/lumen_ai_announcement/index.html#why-did-we-build-it",
"href": "posts/lumen_ai_announcement/index.html#why-did-we-build-it",
"title": "Lumen AI Announcement",
"section": "Why did we build it?",
"text": "Why did we build it?\nIt isn’t news that many organizations struggle to derive real insights from their data. This is either because finding and retaining talent is difficult or because there are so many tools out there. When we first created Lumen a few years ago the vision was to build a declarative specification for data transformation pipelines and visualizations specifically tailored to build data applications. The motivation was to make it possible to build templatable dashboards and have a specification that we could target to generate dashboards using a no-code UI, i.e. the user would click through a wizard, connect to pre-defined data sources, specify the visualization they wanted, lay them out on a grid and then deploy them.\nThe main selling point that would set this solution apart from other similar solutions was that it should be easy to implement custom data sources, transforms and views (i.e. visualizations, BI indicators, tables etc.). While this approach worked, we found it was actually very, very difficult to build a no-code solution that would be intuitive enough for beginners to use while still allowing for the flexibility that is required in real world scenarios - turns out there’s actually a reason why Tableau and PowerBI are as successful as they are despite them being cumbersome to use. So for about a year we put Lumen on the back burner.\nIn late 2023 the first inkling of the AI/LLM “revolution” (or hype cycle if you prefer) started and we immediately thought of Lumen: “can we teach an LLM to generate a Lumen specification?” Initial attempts at one-shot generation were promising but not particularly convincing. Similarly, early open-source models struggled to provide consistent and high-quality results. 
Over time as more powerful models were released and an ecosystem of Python libraries for structured output generation emerged, we settled on a basic architecture, which would leverage the existing HoloViz ecosystem for declaring interactive components, the Lumen specification as the structured output format to target, and Instructor for generating that structured output. Since then we have been working on ensuring robustness, extensibility and building an intuitive and powerful UI for performing analyses."
},
{
"objectID": "posts/lumen_ai_announcement/index.html#use-cases",
"href": "posts/lumen_ai_announcement/index.html#use-cases",
"title": "Lumen AI Announcement",
"section": "Use Cases",
"text": "Use Cases\nLumen AI is a generally useful tool that can be adapted and customized for specific use cases by adding custom agents, analyses and tools. However, even without such customization it can be useful for data exploration, writing SQL and generating plots and charts.\n\nLocal Data Exploration\nIn the simple case Lumen AI is an excellent companion for performing some quick data analysis without writing code. E.g. if you want to perform a quick aggregation on some local (or remote) file, just launch Lumen with lumen-ai serve <my-file>.<csv|parquet|xlsx> and generate some SQL queries, download the data, or export the analysis to a notebook to pick it up from there.\n\n\nEnterprise\nThe enterprise use case is what we designed Lumen for. We discovered early on that while LLMs are relatively good at writing SQL and generating simple plots, in most enterprise settings you have a ton of documentation and context about different datasets, you are frequently performing very similar analyses, and generating similar reports and charts. Therefore we envision the real value of Lumen AI to be custom deployments which are specifically tailored to a particular domain. To make this a bit more concrete let’s envision a set up where we configure custom data sources, analyses and tools.\n\nData Sources\nLet’s say the company you are working for has a Snowflake database containing all business operation data. Here at Anaconda Snowflake might hold datasets of conda downloads by package, a list of our customers and their subscriptions, results of marketing campaigns and much more. To make this data available to Lumen AI we might configure a SnowflakeSource, set up OAuth using the existing Panel integrations and configure the OAuth provider with the Snowflake authorization server. 
This allows us to use the access token to authorize with Snowflake as the user accessing the app, ensuring that Lumen AI only receives the permissions granted to that user.\nsource = SnowflakeSource(\n    account=...,\n    authenticator='token',\n    token=pn.state.access_token\n)\n\nlmai.ExplorerUI(source).servable()\n\n\nAnalyses\nAnother avenue for customization is the ability to define custom Analysis classes that the LLM can invoke and that can be automatically suggested to the user based on the current context (i.e. depending on the dataset that is currently loaded). As an example, let’s say the user has asked to see the table of conda downloads with detailed breakdowns per category. We can easily implement a custom Analysis that detects the presence of certain column(s) and suggests to the user that they can generate a report of conda downloads over the past month or year. This way we can automatically empower users to generate custom reports and analyses given the current context and also allow the agents to invoke these directly.\nclass CondaReport(Analysis):\n    \"\"\"\n    Summarizes conda download statistics.\n    \"\"\"\n\n    @classmethod\n    async def applies(cls, pipeline):\n        return 'pkg_name' in pipeline.data and 'counts' in pipeline.data\n\n    def __call__(self, pipeline):\n        ...\n        return Column(\n            '# Conda Package Report',\n            Markdown(summary),\n            ...\n        )\n\n\nTools & RAG\nAll companies have vast stores of documentation and metadata but often these are difficult to access and link. Lumen AI makes it easy to integrate custom tools that look up information in different places, e.g. let’s say we want Lumen to be able to link to relevant Jira issues. We can write a simple function:\ndef jira_lookup(topic: str):\n    results = jira.search_issues(f'summary ~ \"{topic}\"', maxResults=3)\n    summary = '\\n'.join(f'{issue.key}: {issue.fields.summary}' for issue in results)\n    return summary"
},
{
"objectID": "posts/lumen_ai_announcement/index.html#comparisons",
"href": "posts/lumen_ai_announcement/index.html#comparisons",
"title": "Lumen AI Announcement",
"section": "Comparisons",
"text": "Comparisons\nAgentic AI frameworks are being developed rapidly and standards are only slowly evolving, so building on top of an existing framework would have severely constrained development of Lumen AI. Libraries such as LangGraph, AutoGen, Phi, and CrewAI were just emerging as we were building Lumen AI and to ensure maximum flexibility we decided to begin by building our own architecture and adopt emerging standards as they were widely adopted. There is however another reason that we did not simply build on existing frameworks, and that is because the scope of Lumen AI differs from those other projects, in that the focus of Lumen AI isn’t building a web service but to directly interact with the user, generating rich outputs, having the ability to use forms and other input modalities to interact with the user.\nSpecifically there were a few requirements we had for our Agent (or rather Actor) architecture:\n\nWe wanted to make the actions of our Actor’s as transparent as possible, i.e. the user should be able to inspect, revise and refine the generated outputs and follow along with the reasoning the LLMs used to arrive at those outputs.\nAn Actor should be able to render rich outputs and interact with the user in a richer format that merely chatting back and forth, e.g. ask the user to fill out a form.\nOur Actor’s are solving complex challenges and we wanted to be able to encapsulate complex pipelines that combine user inputs, data transformations, and LLM calls into a single unit.\n\nThese core differences meant that we needed an architecture that would allow us to specify complex prompts, multiple chained prompts, mixed-type outputs, e.g. text and plots, and our agents should be interact with the users using the rich UI features that building on Panel allows. 
So don’t think of Lumen as yet another agent framework but as a more holistic approach that allows writing agents with rich UI interactions to deliver any kind of output to the user.\nAs we move forward we will adopt more of the standards that have recently emerged, e.g. the model context protocol, and integrate with popular agent frameworks."
},
{
"objectID": "posts/lumen_ai_announcement/index.html#how-does-it-work",
"href": "posts/lumen_ai_announcement/index.html#how-does-it-work",
"title": "Lumen AI Announcement",
"section": "How does it work?",
"text": "How does it work?\nAs anyone even vaguely familiar with LLM trends will know, everything now centers around so called AI agents. We strongly recommend reading Anthropic’s recent blog post on “building effective agents” to unpack what defines an agent and how to build an effective one. What is clear is that the exact definition of “agents” is somewhat up in the air, so let’s take it from the horse’s mouth and ask ChatGPT to define AI agents:\n\nAn AI agent is an autonomous system that perceives its environment, makes decisions using some form of intelligence or reasoning, and takes actions to achieve specified goals.\n\nSo clearly an autonomous system that builds data pipelines, visualizations and dashboards falls under the definition of an agent. In other words our goal in building Lumen AI was to build a so-called agentic system, and since the process of building a data pipeline differs from defining a visualization we would want a set of specialized agents that would co-operate to solve the user’s query. Instead of calling everything an Agent we refer to any component that can perform an action on the user’s (or an LLM’s) behalf an Actor.\n\nActors\nIn our design Actors are given a task and then do one of five things:\n\nThey make a plan of action.\nThey provide context for other Actors.\nThey generate text to either summarize results or provide an indication of progress to the user.\nThey generate visual components rendering a rich representation of some data (e.g. a table, plot or visual indicator).\nThey perform some action, e.g. sending an email, deploying an app, etc.\n\nWe further made distinctions between different kinds of Actors:\n\nCoordinator: As the name indicates this kind of Actor coordinates the actions of other Actors, i.e. given a user query it makes a plan and then executes that plan by invoking other agents and tools.\nAgent: The Agent is then responsible for solving a particular task and responding to a user, e.g. 
by generating SQL, executing it and rendering the result as a table.\nTool: A Tool can be invoked by other Actors to perform some task, e.g. to provide additional context using a Google query or to email out a report.\n\n\n\n\nLLMs\nWithout some smarts behind them the Actors are, of course, nothing. In order to allow our Actors to call almost any kind of LLM we added light-weight wrappers around the most common LLM providers and support for local models. During development and evaluation we primarily used OpenAI’s gpt-4o-mini model due to its ideal mix of speed, performance, and cost–it is also the default. While Lumen AI supports Mistral and Anthropic models, and plans to eventually include others from Google, DeepSeek, etc., these additional options haven’t yet undergone extensive testing.\nIn addition to cloud-based solutions, we’ve included llama-cpp-python support. This enables you to run any GGUF model available on Hugging Face—such as Mistral 7B, Qwen 32B, Gemma 7B, or even Llama 70B—depending on your machine’s memory and GPU. A significant advantage of this setup is that these models run locally, ensuring that your input data never leaves your device. Based on our experience, the Qwen2.5-Coder-7B-Instruct model strikes a good balance between local performance and the quality of output and is therefore set as the default when not using cloud providers.\n\n\nPrompting & Context\nIn order to provide the context needed for each Actor to perform its task well, each Actor defines a set of prompts. The prompts consist of jinja2.Templates and pydantic Models, which can reference values in the shared memory, e.g. a summary of the current dataset being accessed. Each Actor can also write to the shared memory to make other Actors aware of the current context. In this way the various Actors can collaboratively fill in details required to answer the user’s query and generate the desired outputs.\n\n\nUI\nAll components in Lumen are extensible, including the default ExplorerUI. 
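The shared-memory pattern from the “Prompting & Context” section above can be sketched with the standard library alone. Lumen itself uses jinja2 templates and pydantic models; string.Template and all names below are illustrative stand-ins:

```python
from string import Template

# Dependency-free sketch of the shared-memory prompting pattern: one Actor
# writes context into shared memory, and a later Actor's prompt template
# references it. Illustrative only -- not Lumen's actual prompt machinery.

memory = {}  # shared memory that all Actors can read from and write to

def summarize_dataset(name: str) -> None:
    # One Actor records a summary of the current dataset for later Actors
    memory["dataset_summary"] = f"table {name!r} with columns pkg_name, counts"

prompt_template = Template(
    "You are a SQL expert.\n"
    "Current dataset: $dataset_summary\n"
    "User query: $query"
)

summarize_dataset("conda_downloads")
prompt = prompt_template.substitute(
    dataset_summary=memory["dataset_summary"],
    query="downloads per package over the last month",
)
```

Here the SQL-writing Actor never inspects the data itself; it simply renders its template against whatever context earlier Actors placed in shared memory.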
The ExplorerUI is an entrypoint for Lumen that focuses on the ability to quickly preview some data and then initiate so-called explorations. The goal of the explorations is to either load a table directly or transform it and then optionally generate one or more views to visualize the data in some form. We also wanted a quick way for the user to get an overview of the data, so the default ExplorerUI embeds a GraphicWalker component, building on the newly released panel-graphic-walker extension."
},
{
"objectID": "posts/lumen_ai_announcement/index.html#whats-next",
"href": "posts/lumen_ai_announcement/index.html#whats-next",
"title": "Lumen AI Announcement",
"section": "What’s next?",
"text": "What’s next?\nAll open-source projects are a work in progress and Lumen AI is no exception. We’re releasing it today because we believe it is already useful as a data exploration tool and to quickly explore large datasets. We also know there’s some way to go in a few respects so we want to be fully transparent in what we expect our next steps to be:\n\nBetter Integration with other frameworks: Lumen was developed over a year with rapid progress, iteration and competing standards. As standards are settled upon, e.g. the Model Context Protocol, we want to make sure we support these existing standards and can also leverage tools and agents built using other frameworks.\nImproving the documentation: We know the documentation is currently a little bare-bones. Our main focus over the coming weeks will be to fill the gaps, work up a range of case studies where we create opinionated and highly customized deployments of Lumen AI for specific use cases.\nValidation and Prompt Optimization: We have some internal validation tests for Lumen but they are nowhere close to complete. We will build out the validation suite and then work on a framework for automated prompt optimization with the help of LLMs.\nMore Agents & Tools: So far we have primarily focused on the core functions which includes SQL generation and basic plotting using Vega and hvPlot. As we continue building Lumen AI we want to provide a richer set of agents to perform more complex analyses.\nSmarter Planning: The current Coordinator actors perform one-shot planning, which works well for simple cases, but complex multi-step analyses will require some more refinement, introspection and more.\nEnd-to-End Dashboard Creation: Since Lumen is built on Panel it is relatively easy to take Lumen generated outputs, arrange them on a grid and deploy the resulting dashboard. 
Currently this requires a multi-step process involving exporting the analysis as a notebook, launching the drag-and-drop interface on a Jupyter server and then deploying the resulting application. We want this capability to be built directly into Lumen so you can build an end-to-end dashboard or report entirely in the Lumen UI."
},
{
"objectID": "posts/lumen_ai_announcement/index.html#try-it-out",
"href": "posts/lumen_ai_announcement/index.html#try-it-out",
"title": "Lumen AI Announcement",
"section": "Try it out!",
"text": "Try it out!\nWe’d love for you to try Lumen out, at its simplest you should be able to run:\npip install lumen[ai]\nand launch the explorer UI from the commandline with:\nlumen-ai serve https://datasets.holoviz.org/windturbines/v1/windturbines.parq --show\nwhich will open the UI!\nWe are really excited to hear your feedback, feature requests and more. So:\n\n⭐ Give us a star on GitHub\n📚 Visit our docs\n👩💻 File a feature request on GitHub\n💬 Chat with us on Discord"
},
{
"objectID": "posts/hvplot_release_0.11/index.html#what-is-hvplot",
"href": "posts/hvplot_release_0.11/index.html#what-is-hvplot",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "What is hvPlot?",
"text": "What is hvPlot?\nhvPlot is an open-source library that offers powerful high-level functionality for data exploration and visualization that doesn’t require you to learn a new API. You can get powerful interactive and compositional Bokeh, Matplotlib, or Plotly plots by simply replacing .plot with .hvplot. hvPlot makes all the analytical power of the HoloViz ecosystem available, using the APIs you already know."
},
{
"objectID": "posts/hvplot_release_0.11/index.html#new-release",
"href": "posts/hvplot_release_0.11/index.html#new-release",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "New release!",
"text": "New release!\nWe are very pleased to announce the 0.11 release of hvPlot! We’ll describe the main changes, including:\n\nNew integration: DuckDB!\nAutomatic latitude/longitude conversion when displaying a tile map\nSupport for displaying subcoordinate y-axis\nNew hover options: hover_tooltips and hover_formatters\nOptimized Pandas index support\nFixing “No output in jupyter”\nUpdate of the minimum version of the dependencies\n\nAs usual the full change log is available on GitHub.\nMany thanks to @Azaya89, @liufeimath and @philipc2 for their first contributions, to @iuryt for contributing again, and to the maintainers @ahuang11, @hoxbro, @maximlt and @philippjfr!\n\nYou can install hvPlot with pip install hvplot, or with conda install hvplot (or conda install conda-forge::hvplot) if you are using Anaconda.\n\n🌟 An easy way to support hvPlot is to give it a star on Github! 🌟"
},
{
"objectID": "posts/hvplot_release_0.11/index.html#new-integration-duckdb",
"href": "posts/hvplot_release_0.11/index.html#new-integration-duckdb",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "New integration: DuckDB!",
"text": "New integration: DuckDB!\nhvPlot has added DuckDB to the long list of libraries it integrates with. Thanks Andrew!\nInstall DuckDB with pip install duckdb or conda install conda-forge::python-duckdb and import hvplot.duckdb to enable the integration. .hvplot() supports DuckDB DuckDBPyRelation and DuckDBConnection objects. In the example below, we create a DuckDB in-memory connection (from a Pandas DataFrame to make it simple) and just plot it with .hvplot.line(...).\n\nimport duckdb\nimport numpy as np\nimport pandas as pd\nimport hvplot.duckdb # noqa \n\ndf_pandas = pd.DataFrame(np.random.randn(1000, 4), columns=list('ABCD')).cumsum()\nconnection = duckdb.connect(':memory:')\nrelation = duckdb.from_df(df_pandas, connection=connection)\nrelation.to_view(\"example_view\");\nrelation.describe()\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n┌─────────┬─────────────────────┬────────────────────┬────────────────────┬────────────────────┐\n│ aggr │ A │ B │ C │ D │\n│ varchar │ double │ double │ double │ double │\n├─────────┼─────────────────────┼────────────────────┼────────────────────┼────────────────────┤\n│ count │ 1000.0 │ 1000.0 │ 1000.0 │ 1000.0 │\n│ mean │ -10.924991575731047 │ 0.8319325462565168 │ 31.451518655996168 │ 12.928098370709069 │\n│ stddev │ 11.870831403141286 │ 9.514072462296758 │ 10.788623199269546 │ 16.225677822936902 │\n│ min │ -38.4522539834605 │ -25.74895137984497 │ 0.9040969837197796 │ -15.6946781879896 │\n│ max │ 9.983881228798655 │ 17.031471683448828 │ 49.7826331906542 │ 38.97909613503948 │\n│ median │ -8.13940182008442 │ 3.0560169324925086 │ 34.386376905116876 │ 15.738919164749506 │\n└─────────┴─────────────────────┴────────────────────┴────────────────────┴────────────────────┘\n\n\n\nrelation.hvplot.line(y=['A', 'B', 'C', 'D'])\n\n\n\n\n\n \n\n\n\n\nDuckDBPyRelation is a bit more optimized because it handles column subsetting directly within DuckDB before the data is converted to a pd.DataFrame. 
So, it’s a good idea to use the connection.sql() method when possible, which gives you a DuckDBPyRelation, instead of connection.execute(), which returns a DuckDBPyConnection.\n\nsql_expr = \"SELECT * FROM example_view WHERE A > 0 AND B > 0\"\nconnection.sql(sql_expr).hvplot.line(y=['A', 'B'], hover_cols=[\"C\"]) # subsets A, B, C"
},
{
"objectID": "posts/hvplot_release_0.11/index.html#automatic-latitudelongitude-conversion-when-displaying-a-tile-map",
"href": "posts/hvplot_release_0.11/index.html#automatic-latitudelongitude-conversion-when-displaying-a-tile-map",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "Automatic latitude/longitude conversion when displaying a tile map",
"text": "Automatic latitude/longitude conversion when displaying a tile map\nA pretty common situation when dealing with geographic data is to have the data expressed in terms of latitude/longitude (e.g. (52.520008°, 13.404954°) for Berlin), typically GPS coordinates. To display this data on a tile map (think Google Map), it needs to be projected to the Pseudo-Mercator projection that is the de facto standard for Web mapping applications (e.g. (6894701.26m, 1492232.65m)for Berlin). Up until this release, you could perform that projection by:\n\ninstalling GeoViews and setting geo=True, or\nprojecting the data yourself with a utility available in HoloViews (from holoviews.util.transform import lon_lat_to_easting_northing)\n\nWith this release and when you set tiles, hvPlot projects latitude/longitude (EPSG:4326 / WGS84) to easting/northing (EPSG:3857 / Pseudo-Mercator) coordinates without additional package dependencies if it detects that the values falls within expected latitude/longitude ranges. This automatic projection can be disabled with projection=False. Find out more in the Geographic Guide.\n\nimport hvplot.pandas # noqa\nfrom bokeh.sampledata.airport_routes import airports\n\nairports.head(2)\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\nAirportID\nName\nCity\nCountry\nIATA\nICAO\nLatitude\nLongitude\nAltitude\nTimezone\nDST\nTZ\nType\nsource\n\n\n\n\n0\n3411\nBarter Island LRRS Airport\nBarter Island\nUnited States\nBTI\nPABA\n70.134003\n-143.582001\n2\n-9\nA\nAmerica/Anchorage\nairport\nOurAirports\n\n\n1\n3413\nCape Lisburne LRRS Airport\nCape Lisburne\nUnited States\nLUR\nPALU\n68.875099\n-166.110001\n16\n-9\nA\nAmerica/Anchorage\nairport\nOurAirports\n\n\n\n\n\n\n\n\nairports.hvplot.points('Longitude', 'Latitude', tiles=True, color='red', alpha=0.2)"
},
{
"objectID": "posts/hvplot_release_0.11/index.html#support-for-displaying-subcoordinate-y-axis",
"href": "posts/hvplot_release_0.11/index.html#support-for-displaying-subcoordinate-y-axis",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "Support for displaying subcoordinate y-axis",
"text": "Support for displaying subcoordinate y-axis\nhvPlot enables you to create overlays where each element has its own distinct y-axis subcoordinate system (added in HoloViews 0.18.0). To activate this feature that automatically distributes overlay elements along the y-axis, set the subcoordinate_y keyword to True. For example, this feature is particularly useful to analyse multiple timeseries.\n\nimport numpy as np\nimport hvplot.pandas # noqa\nfrom bokeh.sampledata.sea_surface_temperature import sea_surface_temperature as sst\n\nsst = sst.assign(locations=np.random.choice(['loc1', 'loc2', 'loc3', 'loc4'], size=len(sst)))\nsst.head(2)\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\ntemperature\nlocations\n\n\ntime\n\n\n\n\n\n\n2016-02-15 00:00:00+00:00\n4.929\nloc3\n\n\n2016-02-15 00:30:00+00:00\n4.887\nloc4\n\n\n\n\n\n\n\n\nsst.hvplot(by='locations', subcoordinate_y=True)\n\n\n\n\n\n \n\n\n\n\nTry zooming in the plot above, the y-axis wheel-zoom will apply to each curve’s respective sub-coordinate y-axis, rather than the global coordinate frame.\nsubcoordinate_y also accepts a dictionary of related options, for example set subcoordinate_y={'subcoordinate_scale': 2} to increase the scale of each sub-plot, resulting in each curve’s vertical range overlapping 50% with its adjacent elements, which allows creating simple ridge plots. Let us know in this Github issue if you’d be interested in a more extensive API to generate ridge plots.\n\ndf = pd.DataFrame({'value': np.random.randn(200), 'cat': list(\"ABCD\") * 50})\ndf['value'] += df['cat'].map(ord)\ndf.hvplot.kde(by='cat', y='value', subcoordinate_y={'subcoordinate_scale': 1.5}, legend=False, color=\"gray\", hover=False)\n\n\n\n\n\n \n\n\n\n\nMore information about subcoordinate y-axis plots can be found in HoloViews’ customizing plots guide and in its gallery."
},
{
"objectID": "posts/hvplot_release_0.11/index.html#new-hover-options-hover_tooltips-and-hover_formatters",
"href": "posts/hvplot_release_0.11/index.html#new-hover-options-hover_tooltips-and-hover_formatters",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "New hover options: hover_tooltips and hover_formatters",
"text": "New hover options: hover_tooltips and hover_formatters\nThe hover_tooltips and hover_formatters keywords have been added to complement hover and hover_cols. In order to customize the Bokeh hover tool, hvPlot users previously had to import and configure the HoverTool model from Bokeh’s API. With these two new options added in HoloViews 1.19.0, you can now directly customize the hover tool wihout any additional import. Find out more about the values accepted by these options in HoloViews’ Plotting with Bokeh guide.\n\nimport hvplot.pandas # noqa\nfrom bokeh.sampledata.periodic_table import elements\n\nelements.head(2)\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\natomic number\nsymbol\nname\natomic mass\nCPK\nelectronic configuration\nelectronegativity\natomic radius\nion radius\nvan der Waals radius\n...\nEA\nstandard state\nbonding type\nmelting point\nboiling point\ndensity\nmetal\nyear discovered\ngroup\nperiod\n\n\n\n\n0\n1\nH\nHydrogen\n1.00794\n#FFFFFF\n1s1\n2.2\n37.0\nNaN\n120.0\n...\n-73.0\ngas\ndiatomic\n14.0\n20.0\n0.00009\nnonmetal\n1766\n1\n1\n\n\n1\n2\nHe\nHelium\n4.002602\n#D9FFFF\n1s2\nNaN\n32.0\nNaN\n140.0\n...\n0.0\ngas\natomic\nNaN\n4.0\n0.00000\nnoble gas\n1868\n18\n1\n\n\n\n\n2 rows × 21 columns\n\n\n\n\nelements.sort_values('metal').hvplot.points(\n 'electronegativity', 'density', by='metal',\n hover_cols=['name', 'symbol', 'CPK'],\n hover_tooltips=[\n 'name',\n ('Symbol', '@symbol'),\n ('CPK', '$color[hex, swatch]:CPK'),\n ('Density', '@density{%.2e}'),\n ],\n hover_formatters={\n '@{density}': 'printf',\n }\n)"
},
{
"objectID": "posts/hvplot_release_0.11/index.html#optimized-pandas-index-support",
"href": "posts/hvplot_release_0.11/index.html#optimized-pandas-index-support",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "Optimized Pandas index support",
"text": "Optimized Pandas index support\nHoloViews 1.19.0 came with optimizations around how Pandas DataFrame indexes are handled, effectively no longer internally calling .reset_index(), which was affecting memory usage and speed. Following HoloViews, hvPlot’s code base was adapted accordingly, making sure that in most cases .reset_index() is not called. This had the benefit to improve the handling of wide datasets too. No pretty plot for this enhancement 😊 But it’s a change that touched some deeper part of the two code bases so we wanted everyone to be aware of it and report any issues."
},
{
"objectID": "posts/hvplot_release_0.11/index.html#fixing-no-output-in-jupyter",
"href": "posts/hvplot_release_0.11/index.html#fixing-no-output-in-jupyter",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "Fixing “No output in jupyter”",
"text": "Fixing “No output in jupyter”\nAn unfortunately too common issue when using hvPlot in a Jupyter Notebook was that sometimes the plots would not just show up no matter how hard you tried, even with after following the classic procedure: notebook cleaning + notebook saving + browser tab hard refresh 😔 The import hvplot.<integration> import mechanism is a convenient way to allow users to have to avoid running the HoloViews/Panel extensions (e.g. hv.extension('bokeh')). However, since Python imports are cached, only the first import actually embeds the extension JavaScript code, meaning that if you re-run the cell(s) containing import hvplot.pandas (or some other integration) then the JavaScript will no longer be available and on subsequent reloads/re-runs of the notebook plots may not appear.\nIn this release, hvPlot adds an IPython hook which simply deletes the imported modules before every cell execution. This is a big hammer but the best we could find! Don’t hesitate to provide us feedback if you encounter any issue related to this change."
},
{
"objectID": "posts/hvplot_release_0.11/index.html#update-of-the-minimum-version-of-the-dependencies",
"href": "posts/hvplot_release_0.11/index.html#update-of-the-minimum-version-of-the-dependencies",
"title": "Plotting made easy with hvPlot: 0.11 release",
"section": "Update of the minimum version of the dependencies",
"text": "Update of the minimum version of the dependencies\nThis regular maintenance practice had not been done in a while. Most notably, hvPlot now depends on holoviews>=1.19.0 (compared to >=0.11.0 previously) to ensure its users benefits from all the new features HoloViews has made available in the recent years. Additionally, hvPlot 0.11 requires Python 3.9 and above. For more details check the diff of the Pull Request that implemented this change.\n\nJoin us on Github, Discourse or Discord to help us improve hvPlot. Happy plotting 😊"
},
{
"objectID": "posts/hvplot_announcement/index.html",
"href": "posts/hvplot_announcement/index.html",
"title": "hvPlot Announcement",
"section": "",
"text": "A high-level plotting API for the PyData ecosystem - built on HoloViews.\nWe are very pleased to introduce a new visualization tool called hvPlot. hvPlot is closely modeled on the Pandas and Xarray .plot APIs, but returns HoloViews objects that display as fully interactive Bokeh-based plots. hvPlot is significantly more powerful than other .plot API tools that have recently become available, because it lets you use data from a wide array of libraries in the PyData ecosystem:"
},
{
"objectID": "posts/hvplot_announcement/index.html#try-it-out",
"href": "posts/hvplot_announcement/index.html#try-it-out",
"title": "hvPlot Announcement",
"section": "Try it out",
"text": "Try it out\nWe hope you’ll give hvPlot a try and it makes your visualization workflows a little bit easier and more interactive. Let us know how it goes and don’t hesitate to file issues or make suggestions for improvements for the library. To get started, follow the installation instructions below and visit the website. Also check out pyviz.org for information about the other PyViz libraries, all of which work well with hvPlot.\n\nInstallation\nhvPlot supports Python 2.7, 3.5, 3.6 and 3.7 on Linux, Windows, or Mac and can be installed with conda:\nconda install -c pyviz hvplot\nor with pip:\npip install hvplot\nFor JupyterLab support, the jupyterlab_pyviz extension is also required::\njupyter labextension install @pyviz/jupyterlab_pyviz\n\n\nAcknowledgements\nhvPlot was built with the support of Anaconda Inc.. Special thanks to all the contributors:\n\nPhilipp Rudiger (@philippjfr)\nJulia Signell (@jsignell)\nJames A. Bednar (@jbednar)\nAndrew Huang (@ahuang11)\nJean-Luc Stevens (@jlstevens)"
},
{
"objectID": "posts/hv_release_1.13/index.html",
"href": "posts/hv_release_1.13/index.html",
"title": "HoloViews 1.13 Release",
"section": "",
"text": "We are very pleased to announce the release of HoloViews 1.13.x!\nSince we did not release blog posts for other 1.13 we will use this opportunity the many great features that have been added in this release. Note that this post primarily focuses on exciting new functionality for a full summary of all features, enhancements and bug fixes see the releases page in the HoloViews documentation.\nMajor features:\n\nAdd link_selection function to make custom linked brushing simple (#3951)\nlink_selection builds on new support for much more powerful data-transform pipelines: new Dataset.transform method (#237, #3932), dim expressions in Dataset.select (#3920), arbitrary method calls on dim expressions (#4080), and Dataset.pipeline and Dataset.dataset properties to track provenance of data\nAdd Annotators to allow easily drawing, editing, and annotating visual elements (#1185)\nCompletely replaced custom Javascript widgets with Panel-based widgets allowing for customizable layout (#84, #805)\nAdd HSpan, VSpan, Slope, Segments and Rectangles elements (#3510, #3532, #4000)\nAdd support for cuDF GPU dataframes, cuPy backed xarrays, and GPU datashading (#3982)\nAdd spatialpandas support and redesigned geometry interfaces for consistent roundtripping (#4120)\nAdd explicit .df and .xr namespaces to dim expressions to allow using dataframe and xarray APIs (#4320)\n\nOther Features:\n\nSupport GIF rendering with Bokeh and Plotly backends (#2956, #4017)\nSupport for Plotly Bars, Bounds, Box, Ellipse, HLine, HSpan, Histogram, RGB, VLine and VSpan plots\nAdd support for linked streams in Plotly backend to enable rich interactivity (#3880, #3912)\nSupport for datashading Area, Spikes, Segments and Polygons (#4120)\nHeatMap now supports mixed categorical/numeric axes (#2128)\nUse __signature__ to generate .opts tab completions (#4193)\n\n\nIf you are using Anaconda, HoloViews can most easily be installed by executing the command conda install -c pyviz holoviews . 
Otherwise, use pip install holoviews.\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n \n \n\n\n\n\nLinked brushing\nDatasets very often have more dimensions than can be shown in a single plot, which is why HoloViews offers so many ways to show the data from each of these dimensions at once (via layouts, overlays, grids, holomaps, etc.). However, even once the data has been displayed, it can be difficult to relate data points between the various plots that are laid out together. For instance, “is the outlier I can see in this x,y plot the same datapoint that stands out in this w,z plot”? “Are the datapoints with high x values in this plot also the ones with high w values in this other plot?” Since points are not usually visibly connected between plots, answering such questions can be difficult and tedious, making it hard to understand multidimensional datasets. Linked brushing (also called “brushing and linking”) offers an easy way to understand how data points and groups of them relate across different plots. Here “brushing” refers to selecting data points or ranges in one plot, with “linking” then highlighting those same points or ranges in other plots derived from the same data.\nIn HoloViews 1.13.x Jon Mease and Philipp Rudiger worked hard on providing a simple way to expose this functionality by leveraging many of HoloViews’ existing features. 
The entry point for using this functionality is the link_selections function, which automatically creates views for the selected and unselected data and indicators for the current selection.\nBelow we create a number of plots from Gorman et al.’s penguin dataset and then apply the link_selections function:\n\ncolor_dim = hv.dim('Species').categorize({\n 'Adelie Penguin': '#1f77b4',\n 'Gentoo penguin': '#ff7f0e',\n 'Chinstrap penguin': '#2ca02c'\n})\n\nscatter = hv.Scatter(penguin_ds, 'Culmen Length (mm)', ['Culmen Depth (mm)', 'Species']).opts(\n color=color_dim, tools=['hover']\n)\nbars = hv.Bars(penguin_ds, 'Species', 'Individual ID').aggregate(function=np.count_nonzero).opts(\n xrotation=45, color=color_dim\n)\nhist = penguin_ds.hist('Body Mass (g)', groupby='Species', adjoin=False, normed=False).opts(\n hv.opts.Histogram(show_legend=False, fill_color=color_dim)\n)\nviolin = hv.Violin(penguin_ds, ['Species', 'Sex'], 'Flipper Length (mm)').opts(\n split='Sex', xrotation=45, show_legend=True, legend_position='right', frame_width=240,\n cmap='Category20'\n)\n\nhv.link_selections(scatter+hist+bars+violin, selection_mode='union').cols(2);\n\n\nAs we can see, the linked selections functionality allows us to link a variety of plot types together and cross-filter on them using both box-select and lasso-select tools. However, the real power behind the linked selections support is that it allows us to select on the raw data and automatically replay complex pipelines of operations; e.g. below is a dashboard, built in just a few lines of Python code, that generates histograms and datashaded plots of 11 million taxi trips and then links them automatically. In this way we can gain insights into large and complex datasets, e.g. 
identifying where taxi trips departing from Newark Airport drop off their passengers in NYC:\n\n\nTo read more about linked brushing see the corresponding user guide.\n\n\nGPU support\nThe RAPIDS initiative started by NVIDIA has made huge strides over the last couple of years, and in particular the cuDF library has brought a GPU-backed DataFrame API to the PyData ecosystem. Since the cuDF and CuPy libraries are now mature enough, we developed a cuDF interface for HoloViews. You can now pass a cuDF DataFrame directly to HoloViews and it will leverage the huge performance gains when computing aggregates, ranges and histograms. Thanks to the work of the folks at NVIDIA and Jon Mease, you can also directly leverage GPU-accelerated Datashader to interactively explore huge datasets with very low latency; e.g. using the NYC taxi datasets you can easily achieve 10x performance improvements when computing histograms and datashaded plots, further speeding up the dashboard presented above without changing a single line of HoloViews code - a cuDF DataFrame behaves as a drop-in replacement for a pandas or Dask dataframe as far as HoloViews is concerned.\n\n\nData pipelines\nHoloViews has long made it possible to declare pipelines of operations to apply to a visualization. However, if a transform involved some complex manipulation of the underlying data, we would have to manually unpack the data, transform it in some way and then create a new element to display it. This made it hard to leverage the fact that HoloViews is agnostic about the data format; in many cases users would either have to know about the type of the data or access it as a NumPy array, which can leave performance on the table or cause unnecessary memory copies. 
Therefore we added an API to easily transform data, which also supports the dynamic nature of the existing .apply API.\nTo demonstrate this new feature we will load an xarray dataset of air temperatures:\n\nair_temp = xr.tutorial.load_dataset('air_temperature')\nair_temp\n\n[xarray.Dataset with dimensions lat: 25, lon: 53, time: 2920 and a single data variable 'air' containing 4x-daily NMC reanalysis air temperatures at the 0.995 sigma level, in degK]\n\n\nSeeing that this dataset has an 'air' variable, we can write a so-called dim expression to express a transform which performs a quantile aggregation along the 'time' dimension:\n\nq = pn.widgets.FloatSlider(name='quantile')\n\nquantile_expr = hv.dim('air').xr.quantile(q, dim='time')\nquantile_expr\n\ndim('air').xr.quantile(FloatSlider(name='quantile'), dim='time')\n\n\nAs you can see, the slider we have created is a valid argument to this transform, and if we now apply the transform the pipeline is re-evaluated whenever the slider value changes:\n\ntemp_ds = hv.Dataset(air_temp, ['lon', 'lat'])\n\ntransformed = temp_ds.apply.transform(air=quantile_expr).apply(hv.Image)\n\npn.Column(q, transformed.opts(colorbar=True, width=400))\n\n\nIn this way we can build transformation pipelines using familiar APIs (pandas or xarray) without losing the ability to inject dynamic parameters driven by widgets or other sources. To read more about data pipelines see the Transforming Elements and Data Processing Pipelines user guides.\n\n\nAnnotators\nThis release also introduced annotation functionality which allows editing, adding and labelling a range of element types. 
At the moment it is possible to annotate the following element types:\n\nPoints/Scatter\nCurve\nRectangles\nPath\nPolygons\n\nAs an example we will create a set of Points and use the annotate function to enable the annotator functionality:\n\ncells = hv.Image(calcium_array[:, :, 0])\n\npoints = hv.Points([(-0.275, -0.0871875), (-0.2275, -0.1996875), (0.1575, 0.0003125)]).opts(\n padding=0, aspect='square', frame_width=400, responsive=False, active_tools=['point_draw']\n)\n\nannotator = hv.annotate.instance()\n\nhv.annotate.compose(cells, annotator(points, name='Cell Annotator', annotations={'Label': str}))\n\n\nIf you select the PointDraw tool from the toolbar you will now be able to add new points, drag existing points around and edit their position and labels via the table. Once we are done we can access the edited data on the annotator object:\n\nannotator.annotated.dframe()\n\n        x         y  Label\n0 -0.2750 -0.087188\n1 -0.2275 -0.199687\n2  0.1575  0.000313\n\n\nNew elements\nThe addition of new visual elements always increases the power of a plotting library significantly. In this release a number of elements were added to draw specific geometries and annotate plots.\n\nRectangles & Segments\nThe ability to draw rectangles and segments provides powerful low-level primitives to render higher-level plots, e.g. 
below we can see an OHLC plot, usually used to indicate the movement of stocks over time, generated using the new Rectangles and Segments elements:\n\ndef OHLC(N):\n xs = np.arange(N)\n ys = np.random.randn(N+1).cumsum()\n\n O = ys[1:]\n C = ys[:-1]\n H = np.max([O, C], axis=0) + np.random.rand(N)\n L = np.min([O, C], axis=0) - np.random.rand(N)\n return (xs, ys, O, H, L, C)\n\nxs, ys, O, H, L, C = OHLC(50)\nboxes = hv.Rectangles((xs-0.25, O, xs+0.25, C))\nsegments = hv.Segments((xs, L, xs, H))\n\n# Color boxes where price decreased red and where price increased green\ncolor_exp = (hv.dim('y0')>hv.dim('y1')).categorize({True: 'green', False: 'red'})\n\nboxes.opts(width=1000, color=color_exp, xlabel='Time', ylabel='Price') * segments.opts(color='black')\n\n\nHSpan and VSpan\nThe ability to draw shaded regions with unlimited extent allows highlighting notable regions along the x- or y-axis of a plot. The new HSpan and VSpan annotation elements allow you to do exactly that; here we shade the regions of the timeseries that lie more than one standard deviation above or below the mean:\n\nys = np.random.randn(1000).cumsum()\n\nymean, ystd, ymin, ymax = ys.mean(), ys.std(), ys.min(), ys.max()\n\ntimeseries = hv.Curve(ys)\n\ntimeseries * hv.HSpan(ymean+ystd, ymax) * hv.HSpan(ymean-ystd, ymin)\n\n\nSlope\nAnother helpful annotation is the ability to draw an infinite sloping line on a plot, complementing the existing HLine and VLine elements. 
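An infinite sloping line is fully described by just two numbers, a slope and an intercept. As a minimal sketch (hypothetical data, independent of HoloViews), a least-squares fit with NumPy yields exactly those two parameters:

```python
import numpy as np

# Hypothetical noisy samples of the line y = 2x + 1
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)

# A degree-1 least-squares fit returns (slope, intercept),
# the two parameters that define an infinite sloping line
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)
```

hv.Slope.from_scatter, shown below, presumably performs an analogous fit on the element's own data to produce the line it draws.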
The Slope element can be used to display a regression line, for example:\n\nscatter = penguin_ds.to(hv.Scatter, 'Body Mass (g)', 'Flipper Length (mm)', 'Species').overlay()\n\nscatter * scatter.apply(hv.Slope.from_scatter, per_element=True).opts(legend_position='bottom_right', frame_width=400)\n\n\nPanel based widgets\nHoloViews has shipped with a set of widgets to explore multi-dimensional parameter spaces since its first public release. These widgets were written as a weekend project and did not follow many of the best practices of Javascript development. This meant they were hard to extend, exhibited a variety of issues related to character encoding, and were not at all customizable. In HoloViews 1.13.0 we completely replaced most of the rendering machinery and widget code with Panel-based widgets, making them easier to maintain, customize and extend.\nSpecifically, the widgets have so far always been located to the right of a plot, but now we have full flexibility to override this:\n\ncalcium_hmap = hv.HoloMap({i: hv.Image(calcium_array[:, :, i]) for i in range(10)}, 'Time')\n\nhv.output(calcium_hmap, widget_location='bottom')\n\n\nSpatialpandas and polygon datashading\nHoloViews has long had strong support for both gridded and tabular data, while support for geometry data has been spottier. In HoloViews 1.13.0 the core model for geometry data was redesigned from the ground up; in particular, HoloViews can now convert natively between different geometry storage backends, including the native dictionary format, geopandas (if GeoViews is installed) and the new addition called spatialpandas. Spatialpandas is closely modeled on GeoPandas but does not have the same heavy GIS dependencies and is highly optimized, making efficient use of pandas extension arrays. 
All of this means that spatialpandas is significantly more performant than geopandas and can also be directly ingested into datashader, making it possible to render thousands or even millions of geometries, including polygons, very quickly.\n\nnyc_buildings = hv.Polygons(buildings, ['x', 'y'], 'type')\n\ndatashade(nyc_buildings, aggregator=ds.by('type', ds.count()), color_key=glasbey);\n\n\nIn addition to Polygons, this release also brings support for datashading a range of other plot types including Area, Spikes and Segments:\n\nxs, ys, O, H, L, C = OHLC(1000000)\n\narea = hv.Area((xs, O))\n\nsegments = hv.Segments((xs, L, xs, H))\n\n(datashade(area, aggregator='any') + datashade(segments) + datashade(hv.Spikes(O))).opts(shared_axes=False)\n\n\nImproved Plotly support\nThe Plotly backend has long been only partially supported, with a wide swath of element types not implemented. This release brought the Bokeh and Plotly backends much closer to feature parity by implementing a wide range of plot types, including:\n\nBars\nBounds\nBox\nEllipse\nHLine/VLine\nHSpan/VSpan\nHistogram\nRGB\n\nBelow we can see examples of each of the element types:\n\n(bars + hist + path + rgb + hspan + shapes).opts(shared_axes=False).cols(2)\n\n\nAdditionally, the Plotly backend now supports linked streams, allowing for deep interactivity; e.g. linked brushing is also supported:\n\n\n\nGIF support for Bokeh and Plotly\nIt has long been possible to generate GIFs with HoloViews using the Matplotlib backend. In this release, however, we have finally extended that support to both the Bokeh and Plotly backends, e.g. 
here we create a GIF zooming in on the Empire State Building in the buildings dataset:\n\nempire_state_loc = -73.9857, 40.7484\n\ndef nyc_zoom(zoom):\n x, y = empire_state_loc\n width = (0.05-0.005*zoom)\n return datashade(nyc_buildings, aggregator=ds.by('type', ds.any()), color_key=glasbey[::-1],\n x_range=(x-width, x+width), y_range=(y-width, y+width), dynamic=False, min_alpha=0)\n\nhmap = hv.HoloMap({i: nyc_zoom(i) for i in range(10)}).opts(\n xaxis=None, yaxis=None, title='', toolbar=None, framewise=True,\n width=600, height=600, show_frame=False, backend='bokeh'\n)\n\nhv.output(hmap, holomap='gif', backend='bokeh', fps=2)\n\n\nWhat’s next?\nIn the coming months we will finally be focusing on a HoloViews 2.0 release where the main aims are:\n\nSplitting plotting and data components into separate packages\nAPI cleanup\nMore consistent styling between backends\n\nAdditionally, we are continuing to work on some exciting features:\n\nFurther work on the Plotly backend making it a more equal citizen in the HoloViz ecosystem\nAdditions of new data interfaces including Vaex and Ibis\nBetter support for Pandas multi-indexes"
},
{
"objectID": "posts/hugging_face_template/index.html",
"href": "posts/hugging_face_template/index.html",
"title": "Building an interactive ML dashboard in Panel",
"section": "",
"text": "Demo of the image classification app.\nHoloViz Panel is a versatile Python library that empowers developers and data scientists to build interactive visualizations with ease. Whether you’re working on machine learning projects, developing web applications, or designing data dashboards, Panel provides a powerful set of tools and features to enhance your data exploration and presentation capabilities. In this blog post, we will delve into the exciting features of HoloViz Panel, explore how it can revolutionize your data visualization workflows, and demonstrate how you can make an app like this using about 100 lines of code.\nTry out the app and check out the code:"
},
{
"objectID": "posts/hugging_face_template/index.html#harnessing-the-power-of-mlai",
"href": "posts/hugging_face_template/index.html#harnessing-the-power-of-mlai",
"title": "Building an interactive ML dashboard in Panel",
"section": "Harnessing the Power of ML/AI",
"text": "Harnessing the Power of ML/AI\nML/AI has become an integral part of data analysis and decision-making processes. With Panel, you can seamlessly integrate ML models and results into your visualizations. In this blog post, we will explore how to make an image classification task using the OpenAI CLIP model.\nCLIP is pretrained on a large dataset of image-text pairs, enabling it to understand images and corresponding textual descriptions and work for various downstream tasks such as image classification.\nThere are two ML-related functions we used to perform the image classification task. The first function load_processor_model enables us to load a pre-trained CLIP model from Hugging Face. The second function get_similarity_score calculates the degree of similarity between the image and a provided list of class labels.\[email protected]\ndef load_processor_model(\n processor_name: str, model_name: str\n) -> Tuple[CLIPProcessor, CLIPModel]:\n processor = CLIPProcessor.from_pretrained(processor_name)\n model = CLIPModel.from_pretrained(model_name)\n return processor, model\n\ndef get_similarity_scores(class_items: List[str], image: Image) -> List[float]:\n processor, model = load_processor_model(\n \"openai/clip-vit-base-patch32\", \"openai/clip-vit-base-patch32\"\n )\n inputs = processor(\n text=class_items,\n images=[image],\n return_tensors=\"pt\", # pytorch tensors\n )\n outputs = model(**inputs)\n logits_per_image = outputs.logits_per_image\n class_likelihoods = logits_per_image.softmax(dim=1).detach().numpy()\n return class_likelihoods[0]"
},
{
"objectID": "posts/hugging_face_template/index.html#binding-widgets-for-interactivity",
"href": "posts/hugging_face_template/index.html#binding-widgets-for-interactivity",
"title": "Building an interactive ML dashboard in Panel",
"section": "Binding Widgets for Interactivity",
"text": "Binding Widgets for Interactivity\nOne of the key strengths of Panel is its ability to bind widgets to functions. This functionality provides an intuitive interface for users to manipulate the underlying data and gain deeper insights through interaction.\n\nPython Function\nIn our example, we have a process_input function, which formats the similarity score we get from the image classification model to a Panel object with a good-looking UI. The actual function utilizes async; if you’re unfamiliar with async, don’t worry! We will explain it in a later section, but note async is not a requirement of using Panel–Panel simply supports it!\nasync def process_inputs(class_names: List[str], image_url: str):\n \"\"\"\n High level function that takes in the user inputs and returns the\n classification results as panel objects.\n \"\"\"\n ...\n yield results\n\n\nPanel Widgets\nThere are two widgets that we use to interact with this function.\n\nimage_url is a TextInput widget, which allows entering any string as the image URL.\nclass_names is another TextInput widget, which accepts possible class names for the model to classify.\n\nimage_url = pn.widgets.TextInput(\n name=\"Image URL to classify\",\n value=pn.bind(random_url, randomize_url),\n)\nclass_names = pn.widgets.TextInput(\n name=\"Comma separated class names\",\n placeholder=\"Enter possible class names, e.g. cat, dog\",\n value=\"cat, dog, parrot\",\n)\n\n\nBinding Widgets to Function\nBased on the process_inputs function signature, it accepts two parameters: class_names and image_url. 
We can bind each arg/kwarg to a widget using pn.bind like this:\ninteractive_result = pn.panel(\n pn.bind(process_inputs, image_url=image_url, class_names=class_names),\n height=600,\n)\n\nThe first positional argument is the function name.\nThe keyword arguments after match the function’s signature, and thus the widgets’ values are bound to the function’s keyword arguments.\n\nTo clarify, if the widget was named image_url_input instead of image_url, then the call would be:\npn.bind(process_inputs, image_url=image_url_input, ...)"
},
{
"objectID": "posts/hugging_face_template/index.html#adding-template-design-styling",
"href": "posts/hugging_face_template/index.html#adding-template-design-styling",
"title": "Building an interactive ML dashboard in Panel",
"section": "Adding Template Design Styling",
"text": "Adding Template Design Styling\nThe aesthetics of your applications and dashboards play a critical role in engaging your audience. Panel enables you to add styling based off popular designs like Material or Fast to your visualizations, allowing you to create visually appealing and professional-looking interfaces.\nIn this example, we used a bootstrap template, where we can control what we’d like to show in multiple areas such as title and main, and we can specify sizes and colors for various components:\npn.extension(design=\"bootstrap\", sizing_mode=\"stretch_width\")\nWe also set the Progress bar design to Material.\nrow_bar = pn.indicators.Progress(\n ...\n design=pn.theme.Material,\n)\nNote, you can use styles and stylesheets too!"
},
{
"objectID": "posts/hugging_face_template/index.html#caching-for-expensive-tasks",
"href": "posts/hugging_face_template/index.html#caching-for-expensive-tasks",
"title": "Building an interactive ML dashboard in Panel",
"section": "Caching for Expensive Tasks",
"text": "Caching for Expensive Tasks\nSome data processing tasks can be computationally expensive, causing sluggish performance. Panel offers caching mechanisms that allow you to store the results of expensive computations and reuse them when needed, significantly improving the responsiveness of your applications.\nIn our example, we cached the output of the load_processor_model using the pn.cache decorator. This means that we don’t need to download and load the model multiple times. This step will make your app feel much more responsive!\nAdditional note: for further responsiveness, there’s defer_loading and loading indicators.\[email protected]\ndef load_processor_model(\n processor_name: str, model_name: str\n) -> Tuple[CLIPProcessor, CLIPModel]:\n processor = CLIPProcessor.from_pretrained(processor_name)\n model = CLIPModel.from_pretrained(model_name)\n return processor, model"
},
{
"objectID": "posts/hugging_face_template/index.html#bridging-functionality-with-javascript",
"href": "posts/hugging_face_template/index.html#bridging-functionality-with-javascript",
"title": "Building an interactive ML dashboard in Panel",
"section": "Bridging Functionality with JavaScript",
"text": "Bridging Functionality with JavaScript\nWhile Panel provides a rich set of interactive features, you may occasionally require additional functionality that can be achieved through JavaScript. It’s easy to integrate JavaScript code with Panel visualizations to extend their capabilities. By bridging the gap between Python and JavaScript, you can create advanced visualizations and add interactive elements that go beyond the scope of Panel’s native functionality.\nAt the bottom of our app, you might have observed a collection of icons representing Panel’s social media accounts, including LinkedIn and Twitter. When you click on any of these icons, you will be automatically redirected to the respective social media profiles. This seamless click and redirect functionality is made possible through Panel’s JavaScript integration with the js_on_click method:\nfooter_row = pn.Row(pn.Spacer(), align=\"center\")\nfor icon, url in ICON_URLS.items():\n href_button = pn.widgets.Button(icon=icon, width=35, height=35)\n href_button.js_on_click(code=f\"window.open('{url}')\")\n footer_row.append(href_button)\nfooter_row.append(pn.Spacer())"
},
{
"objectID": "posts/hugging_face_template/index.html#understanding-sync-vs.-async-support",
"href": "posts/hugging_face_template/index.html#understanding-sync-vs.-async-support",
"title": "Building an interactive ML dashboard in Panel",
"section": "Understanding Sync vs. Async Support",
"text": "Understanding Sync vs. Async Support\nAsynchronous programming has gained popularity due to its ability to handle concurrent tasks efficiently. We’ll discuss the differences between synchronous and asynchronous execution and explore Panel’s support for asynchronous operations. Understanding these concepts will enable you to leverage async capabilities within Panel, providing enhanced performance and responsiveness in your applications.\nUsing async to your function allows collaborative multitasking within a single thread and allows IO tasks to happen in the background. For example, when we fetch a random image to the internet, we don’t know how long we’d need to wait and we don’t want to stop our program while waiting. Async enables concurrent execution, allowing us to perform other tasks while waiting and ensuring a responsive application. Be sure to add the corresponding awaits too.\nasync def open_image_url(image_url: str) -> Image:\n async with aiohttp.ClientSession() as session:\n async with session.get(image_url) as resp:\n return Image.open(io.BytesIO(await resp.read()))\nIf you are unfamiliar with async, it’s also possible to rewrite this in sync too! async is not a requirement of using Panel!\ndef open_image_url(image_url: str) -> Image:\n with requests.get(image_url) as resp:\n return Image.open(io.BytesIO(resp.read()))"
},
{
"objectID": "posts/hugging_face_template/index.html#other-ideas-to-try",
"href": "posts/hugging_face_template/index.html#other-ideas-to-try",
"title": "Building an interactive ML dashboard in Panel",
"section": "Other Ideas to Try",
"text": "Other Ideas to Try\nHere we only explored one idea; there’s so much more you can try:\n\nInteractive Text Generation: Utilize Hugging Face’s powerful language models, such as GPT or Transformer, to generate interactive text. Combine Panel’s widget binding capabilities with Hugging Face models to create dynamic interfaces where users can input prompts or tweak parameters to generate custom text outputs.\nSentiment Analysis and Text Classification: Build interactive dashboards using Hugging Face’s pre-trained sentiment analysis or text classification models. With Panel, users can input text samples, visualize predicted sentiment or class probabilities, and explore model predictions through interactive visualizations.\nLanguage Translation: Leverage Hugging Face’s translation models to create interactive language translation interfaces. With Panel, users can input text in one language and visualize the translated output, allowing for easy experimentation and exploration of translation quality.\nNamed Entity Recognition (NER): Combine Hugging Face’s NER models with Panel to build interactive NER visualizations. Users can input text and visualize identified entities, highlight entity spans, and explore model predictions through an intuitive interface.\nChatbots and Conversational AI: With Hugging Face’s conversational models, you can create interactive chatbots or conversational agents. Panel enables users to have interactive conversations with the chatbot, visualize responses, and customize the chatbot’s behavior through interactive widgets.\nModel Fine-tuning and Evaluation: Use Panel to create interactive interfaces for fine-tuning and evaluating Hugging Face models. 
Users can input custom training data, adjust hyperparameters, visualize training progress, and evaluate model performance through interactive visualizations.\nModel Comparison and Benchmarking: Build interactive interfaces with Panel to compare and benchmark different Hugging Face models for specific NLP tasks. Users can input sample inputs, compare model predictions, visualize performance metrics, and explore trade-offs between different models.\n\nCheck out our app gallery for other ideas! Happy experimenting!"
},
{
"objectID": "posts/hugging_face_template/index.html#join-our-community",
"href": "posts/hugging_face_template/index.html#join-our-community",
"title": "Building an interactive ML dashboard in Panel",
"section": "Join Our Community",
"text": "Join Our Community\nThe Panel community is vibrant and supportive, with experienced developers and data scientists eager to help and share their knowledge. Join us and connect with us:\n\nDiscord\nDiscourse\nTwitter\nLinkedIn\nGithub"
},
{
"objectID": "posts/gv_release_1.5/index.html",
"href": "posts/gv_release_1.5/index.html",
"title": "GeoViews 1.5 Release",
"section": "",
"text": "We are very pleased to announce the release of GeoViews 1.5!\nThis release contains a large number of features and improvements. Some highlights include:\nMajor feature:\nNew components:\nNew features:\nEnhancements:\nPlus many other bug fixes, enhancements and documentation improvements. For full details, see the Release Notes.\nIf you are using Anaconda, GeoViews can most easily be installed by executing the command conda install -c pyviz geoviews . Otherwise, you can also use pip install geoviews as long as you satisfy the cartopy dependency yourself."
},
{
"objectID": "posts/gv_release_1.5/index.html#bokeh-support-for-projections",
"href": "posts/gv_release_1.5/index.html#bokeh-support-for-projections",
"title": "GeoViews 1.5 Release",
"section": "Bokeh support for projections",
"text": "Bokeh support for projections\nIn the past the Bokeh backend for GeoViews only supported displaying plots in Web Mercator coordinates. In this release this limitation was lifted and plots may now be projected to almost all supported Cartopy projections (to see the full list see the user guide):\n\ncities = pd.read_csv(gv_path+'/cities.csv', encoding=\"ISO-8859-1\")\npoints = gv.Points(cities[cities.Year==2050], ['Longitude', 'Latitude'], ['City', 'Population'])\nfeatures = gf.ocean * gf.land * gf.coastline\n\noptions = dict(width=600, height=350, global_extent=True,\n show_bounds=True, color='black', tools=['hover'], axiswise=True,\n color_index='Population', size_index='Population', size=0.002, cmap='viridis')\n\n(features * points.options(projection=ccrs.Mollweide(), **options) +\n features * points.options(projection=ccrs.PlateCarree(), **options))"
},
{
"objectID": "posts/gv_release_1.5/index.html#new-elements",
"href": "posts/gv_release_1.5/index.html#new-elements",
"title": "GeoViews 1.5 Release",
"section": "New elements",
"text": "New elements\nThe other main enhancements to GeoViews in the 1.5 release come from the addition of a wide array of new elements, some of which were recently added in HoloViews and others which have been newly made aware of geographic coordinate systems and added to GeoViews.\n\nGraph\nThe first such addition is the new Graph element, which was added in HoloViews 1.9 and has now been made aware of geographic coordinates. The example below (available in the gallery) demonstrates how to use the Graph element to display airport routes from Hawaii with great-circle paths:\n\nVectorField\nAnother element that has been available in HoloViews and has now been made aware of geographic coordinates is VectorField, useful for displaying vector quantities on a map. Like most HoloViews and GeoViews elements it can be rendered using both Bokeh (left) and Matplotlib (right):\n\nTriMesh\nAlso building on the graph capabilities is the TriMesh element, which allows defining arbitrary meshes from a set of nodes and a set of simplices (triangles defined as lists of node indexes). The TriMesh element allows easily visualizing Delaunay triangulations and even very large meshes, thanks to corresponding support added to Datashader. Below we can see a small TriMesh displayed as a wireframe and an interpolated, datashaded mesh of the Chesapeake Bay containing 1M triangles:\n\nQuadMesh\nGeoViews has long had an Image element that supports regularly sampled, rectilinear meshes similar to matplotlib’s imshow. To plot irregularly sampled rectilinear and curvilinear meshes, GeoViews now also has a QuadMesh element (akin to matplotlib’s pcolormesh). Below is a curvilinear mesh loaded from xarray:\n\nHexTiles\nAnother often requested feature is a hexagonal bin plot, which can be very helpful in visualizing large collections of points. Thanks to the recent addition of a hex tiling glyph in the Bokeh 0.12.15 release it was straightforward to add this support in the form of a [HexTiles element](http://holoviews.org/reference/elements/bokeh/HexTiles.html), which supports both simple bin counts and weighted binning, and fixed or variable hex sizes.\nBelow we can see a HexTiles plot of ~7 million points representing the NYC population, where each hexagonal bin is scaled and colored by the bin value:\n\nLabels\nThe existing Text element allows adding text to a plot, but only one item at a time, which is not suitable for plotting the large collections of text items that many users have been requesting. The new Labels element provides vectorized text plotting, which is probably most often used to annotate data points or regions of another plot type. Here we select the 20 most populous cities in 2050, plot them using the Points element, and use the Labels element to label each point:"
},
{
"objectID": "posts/gv_release_1.5/index.html#features",
"href": "posts/gv_release_1.5/index.html#features",
"title": "GeoViews 1.5 Release",
"section": "Features",
"text": "Features\nApart from the new collection of elements that were added, GeoViews 1.5 also comes with an impressive set of new features and enhancements.\n\nInbuilt Tile Sources\nSince plotting on top of a map tile source is such a common and useful feature, a new tile_sources module has been added to GeoViews. The new geoviews.tile_sources module includes a number of commonly used tile sources from CartoDB, Stamen, ESRI, OpenStreetMap and Wikipedia, a small selection of which is shown below:\n\nimport geoviews.tile_sources as gvts\n\n(gvts.CartoLight + gvts.CartoEco + gvts.ESRI + gvts.OSM + gvts.StamenTerrain + gvts.Wikipedia).cols(3)\n\nDatashader & xESMF regridding\nWhen working with mesh and raster data in a geographic context it is frequently useful to regrid the data. In this release we have improved support for regridding and rasterizing rectilinear and curvilinear grids and trimeshes using the Datashader and xESMF libraries. For a detailed overview of these capabilities see the user guide. As a quick summary:\n\nDatashader provides capabilities to quickly rasterize and regrid data of all kinds (Image, RGB, HSV, QuadMesh, TriMesh, Path, Points and Contours) but does not support complex interpolation and weighting schemes\nxESMF can regrid between general recti- and curvi-linear grids (Image and QuadMesh) with all ESMF regridding algorithms, such as bilinear, conservative and nearest neighbour\n\nBelow you can see the curvilinear mesh displayed above regridded and interpolated using xESMF:\n\nReuse existing file: bilinear_(-179.877, 179.749)_(16.334, 89.638)_400x400.nc\n\nHover now displays lat/lon coordinates\nAs you may have noticed when hovering over some of the plots in this blog post, the hover tooltips now automatically format coordinates as latitudes and longitudes rather than the previous (and mostly useless) Web Mercator coordinates.\n\nOperations now CRS aware\nIn the past, when operations defined in HoloViews were applied to GeoViews elements, the coordinate reference system (CRS) of the data was ignored and a plain HoloViews element was returned. Thanks to the ability to register pre- and post-processors for operations, operations such as datashade, rasterize, contours and bivariate_kde will now retain the coordinate system of the data.\nAs a simple example we will use the bivariate_kde operation from HoloViews to generate a density map from a set of points. Here the PlateCarree crs is retained throughout the operation so that the returned Contours element is appropriately projected on top of the tile source:\n\nfrom holoviews.operation.stats import bivariate_kde\n\npopulation = gv.Points(cities[cities.Year==2050], ['Longitude', 'Latitude'], 'Population')\n\ngvts.StamenTerrainRetina * bivariate_kde(population, bandwidth=0.1).options(\n width=500, height=450, show_legend=False, is_global=True\n).relabel('Most populous city density map')\n\nProjection operation improved\nThe gv.project operation provides a high-level wrapper for projecting all GeoViews element types and now has better handling for polygons and paths as well as all the new element types added in this release."
},
{
"objectID": "posts/gv_release_1.5/index.html#improved-documentation-gallery",
"href": "posts/gv_release_1.5/index.html#improved-documentation-gallery",
"title": "GeoViews 1.5 Release",
"section": "Improved documentation & gallery",
"text": "Improved documentation & gallery\nThis release was also accompanied by an overhaul of the existing documentation, specifically an improved user guide on projections and a whole new gallery with a wide (and expanding) selection of examples."
},
{
"objectID": "posts/ds_release_0.13/index.html#what-is-datashader",
"href": "posts/ds_release_0.13/index.html#what-is-datashader",
"title": "Datashader 0.13 Release",
"section": "What is Datashader?",
"text": "What is Datashader?\nDatashader is an open-source Python library for rendering large datasets quickly and accurately. Datashader provides highly optimized, scalable support for rasterizing your data into a fixed-size array for pixel-based displays, while avoiding overplotting and other issues that make it difficult to work with large datasets. Datashader works well on its own, but it is even more powerful when embedded into an interactive plotting library like Bokeh, Plotly, or (now!) Matplotlib."
},
{
"objectID": "posts/ds_release_0.13/index.html#announcing-datashader-0.13",
"href": "posts/ds_release_0.13/index.html#announcing-datashader-0.13",
"title": "Datashader 0.13 Release",
"section": "Announcing Datashader 0.13!",
"text": "Announcing Datashader 0.13!\nWe are very pleased to announce the 0.12.1 and 0.13 releases of Datashader! These releases include new features from a slew of different contributors, plus maintenance and bug fixes from Jim Bednar, Philipp Rudiger, Peter Roelants, Thuy Do Thi Minh, Chris Ball, and Jean-Luc Stevens.\nWhat’s new:\n- Matplotlib Artist for Datashader\n- Much more powerful categorical plotting\n- dynspread that actually works!\n- Aggregate spreading\n- Anti-aliasing (experimental)\n- Datashader support in Dash\n- inspect_points for interactive exploration in HoloViews"
},
{
"objectID": "posts/ds_release_0.13/index.html#matplotlib-artist-for-datashader",
"href": "posts/ds_release_0.13/index.html#matplotlib-artist-for-datashader",
"title": "Datashader 0.13 Release",
"section": "Matplotlib Artist for Datashader",
"text": "Matplotlib Artist for Datashader\nThanks to Nezar Abdennur (nvictus), Trevor Manz, Thomas Caswell, and Philipp Rudiger.\nDatashader works best when embedded in an interactive plotting library so that data can be revealed at every spatial scale by zooming and panning. Thomas Caswell made a draft of Datashader support for Matplotlib during SciPy 2016 when Datashader was first announced, but there was still a lot of work needed to make it general. Various people made suggestions, but largely the sketch sat patiently waiting for someone to finish it. In the meantime, Thomas Robitaille made a simpler points-only renderer https://github.com/astrofrog/mpl-scatter-density, which is useful if that’s all that’s needed. During sprints at SciPy 2020, Nezar Abdennur and Trevor Manz resuscitated Tom’s work, and it’s now been released at last! You can now use all the power of Datashader with any of Matplotlib’s many backends, e.g. here for the osx backend:\nimport matplotlib.pyplot as plt, dask.dataframe as dd\nimport datashader as ds, colorcet as cc\nimport datashader.transfer_functions as tf\nfrom datashader.mpl_ext import dsshow\n%matplotlib osx\n\ndf = dd.read_parquet('data/nyc_taxi_wide.parq').compute()\n\ndsshow(df, ds.Point('dropoff_x', 'dropoff_y'), norm='eq_hist',\n cmap=cc.gray[::-1], shade_hook=tf.dynspread);\n\nSee getting_started/Interactivity to see how to use it."
},
{
"objectID": "posts/ds_release_0.13/index.html#much-more-powerful-categorical-plotting",
"href": "posts/ds_release_0.13/index.html#much-more-powerful-categorical-plotting",
"title": "Datashader 0.13 Release",
"section": "Much more powerful categorical plotting",
"text": "Much more powerful categorical plotting\nThanks to Michael Ihde (@maihde), Oleg Smirnov, Philipp Rudiger, and Jim Bednar.\nOne of Datashader’s most powerful features is its categorical binning and categorical colormapping, which allow detailed understanding of how the distribution of data differs by some other variable, such as this plot of how population is segregated by race in New York City:\n\nTo build such a plot, Datashader calculates a stack of aggregate arrays simultaneously, one per category, instead of a single aggregate array as in the non-categorical case.\nPreviously, categorical binning and plotting was limited to a count() reduction, i.e., counting how many datapoints fell into each pixel, by category, implemented using a special cat_count() reduction. Categorical plotting has now been fully generalized into a new ds.by() reduction, which accepts a categorical column along with count() or any other reduction (max(), min(), mean(), sum(), etc.). Thus it’s now possible to plot the mean value of any column, per pixel, per category. See the Pipeline docs for details.\nYou can also now use categorical binning and plotting with numerical columns using new functions category_modulo and category_binning, which opens up entirely new applications for Datashader. category_binning effectively gives Datashader the power to do 3D aggregations of numeric axes, not just the usual 2D. For instance, by(category_binning('z', 0, 10, 16)) will bin by the floating-point column z, counting datapoints in each of 16 categories (0: 0<=z<10, 1: 10<=z<20, etc.). Combining category_binning with by, you can now do complex 3D binning like computing the maximum age in each (x, y, weight) range:\ncat = ds.category_binning('weight', lower=0, higher=200, nbins=10)\nagg = canvas.points(df,'x','y', agg=ds.by(cat, ds.max('age')))\ncategory_modulo is useful when working with very large numbers of unsorted integers, using a modulo operator on an integer column to reduce a large number of categories down to something more tractable for plotting.\nSee #875 and #927 for details on by, category_modulo, and category_binning (currently documented only at https://github.com/holoviz/datashader/pull/927#issuecomment-725991064)."
},
{
"objectID": "posts/ds_release_0.13/index.html#dynspread-that-actually-works",
"href": "posts/ds_release_0.13/index.html#dynspread-that-actually-works",
"title": "Datashader 0.13 Release",
"section": "dynspread that actually works!",
"text": "dynspread that actually works!\nThanks to Jim Bednar.\nDatashader’s points plotting is designed to aggregate datapoints by pixel, accurately counting how many datapoints fell into each pixel. For large datasets, such a plot will accurately reveal the spatial distribution of the data over the axes plotted. However, a consequence is that an individual data point not surrounded by others will show up as a single pixel, which can be difficult to see on a high-resolution monitor, and it is almost impossible to see its color. To alleviate this issue and make it easier to go back and forth between the big picture and individual datapoints, Datashader has long offered the dynspread output-transformation function, which takes each pixel and dilates it (increases it in size) until the density of such points reaches a specified metric value. However, dynspread never worked very well in practice, always either doing no spreading or one step of spreading (a 3x3 kernel). After a fresh look at the code, it became clear that the first step of spreading was artificially increasing the amount of estimated pixel density, making it very unlikely that a second or third step would ever be done.\ndynspread now spreads each pixel by an integer radius px up to the maximum radius max_px, stopping earlier if a specified fraction of data points have non-empty neighbors within the radius. This new definition provides predictable, well-behaved dynspread behavior even for large values of max_px, making isolated datapoints easily visible. (#1001)\n\nNote that this definition is only compatible with points, as they are spatially isolated; any usage of dynspread with datatypes other than points should be replaced with spread(), which will do what was probably intended by the original dynspread call anyway (i.e., to make a line or polygon edge thicker)."
},
{
"objectID": "posts/ds_release_0.13/index.html#aggregate-spreading",
"href": "posts/ds_release_0.13/index.html#aggregate-spreading",
"title": "Datashader 0.13 Release",
"section": "Aggregate spreading",
"text": "Aggregate spreading\nThanks to Jean-Luc Stevens.\nSpreading previously worked only on RGB arrays, not numerical aggregate arrays, which meant that Datashader users had to choose between seeing isolated datapoints and having interactive features like Bokeh’s hover tool and colorbars that require access to the numerical aggregate values. spread and dynspread now work equally well with either RGB aggregates or numerical aggregates, and we now recommend that users spread at the numerical aggregate level in all supported cases. E.g. in HoloViews, use spread(rasterize(obj)).opts(cnorm='eq_hist', cmap='fire') (or cnorm='log') instead of datashade(obj, cmap='fire'), and you’ll now have colorbar and hover support using Bokeh 2.3.3 or later. (#771)\nimport dask.dataframe as dd, holoviews as hv\nfrom holoviews.operation.datashader import rasterize, dynspread\nimport bokeh, datashader as ds\nhv.extension(\"bokeh\")\n\ndf = dd.read_parquet('data/nyc_taxi_wide.parq').compute()\npts = hv.Points(df, ['dropoff_x', 'dropoff_y'])\nopts = hv.opts.Image(cnorm='log', colorbar=True, width=700, tools=['hover'])\ndynspread(rasterize(pts)).opts(opts)"
},
{
"objectID": "posts/ds_release_0.13/index.html#anti-aliasing-experimental",
"href": "posts/ds_release_0.13/index.html#anti-aliasing-experimental",
"title": "Datashader 0.13 Release",
"section": "Anti-aliasing (experimental)",
"text": "Anti-aliasing (experimental)\nThanks to Valentin Haenel.\nDatashader’s line aggregations (also used in trimesh and network plotting) count how many times a line crosses a given pixel. The resulting line plots are very blocky, because of binary transitions between rows and columns depending on where the underlying line lands in the aggregate array grid. To improve the appearance of such lines (at a cost of making them less easy to interpret as counts of crossings), Datashader now supports antialiased lines. This support is only partial and still experimental; it’s enabled by adding antialias=True to the Canvas.line() method call and is currently restricted to sum and max reductions only, and to a single-pixel line width. (#916)\n\nThe remaining updates listed below are shipped in other packages, not Datashader itself, but provide additional power for Datashader users."
},
{
"objectID": "posts/ds_release_0.13/index.html#datashader-support-in-dash",
"href": "posts/ds_release_0.13/index.html#datashader-support-in-dash",
"title": "Datashader 0.13 Release",
"section": "Datashader support in Dash",
"text": "Datashader support in Dash\nThanks to Jon Mease.\nThe Dash package for deploying data-science dashboards now supports Datashader using the high-level HoloViews Plotly backend. HoloViews Plotly, Matplotlib, and Bokeh plots can now be deployed using either a Bokeh-based server, which supports user-specific state that makes programming simpler, or a Dash-based server, which has a stateless model that can support larger numbers of concurrent users on a given set of server hardware."
},
{
"objectID": "posts/ds_release_0.13/index.html#inspect-function-for-interactive-exploration-in-holoviews",
"href": "posts/ds_release_0.13/index.html#inspect-function-for-interactive-exploration-in-holoviews",
"title": "Datashader 0.13 Release",
"section": "inspect function for interactive exploration in HoloViews",
"text": "inspect function for interactive exploration in HoloViews\nThanks to Jean-Luc Stevens and Philipp Rudiger.\nHoloViews has always been an easy way to work with interactive Datashader plots by handling user events, requesting an updated Datashader plot, and rendering the results. However, the resulting plots always showed only an aggregated view of the data, no matter how much the user zoomed in. HoloViews 1.14.4 now ships with inspect_points() and inspect_polygons(), wrapped in a general inspect function that uses Datashader’s aggregate to determine if there is data in a local region, then queries the original dataset to return those specific points and all their metadata. The result is that you can now view all of your data using Datashader, while still being able to see individual data points using hover or selection.\nSee the new ship_traffic example for how to use inspect_points and the NYC Buildings example for how to use inspect_polygons. Also see HoloViews linked brushing for related functionality that supports linked selections on Datashader and other plots."
},
{
"objectID": "posts/ds_release_0.13/index.html#help-us",
"href": "posts/ds_release_0.13/index.html#help-us",
"title": "Datashader 0.13 Release",
"section": "Help us!",
"text": "Help us!\nDatashader is an open-source project and we are always looking for new contributors. Join the discussion on Discourse; we would be very excited to get you started contributing! Also please get in touch with us if you work at an organization that would like to support future Datashader development, fund new Datashader features, or set up a support contract."
},
{
"objectID": "posts/quarto_migration/index.html",
"href": "posts/quarto_migration/index.html",
"title": "Reviving the blog with Quarto",
"section": "",
"text": "Following the tradition, we have decided that our first post after migrating to Quarto would be about the migration itself!"
},
{
"objectID": "posts/quarto_migration/index.html#why-change",
"href": "posts/quarto_migration/index.html#why-change",
"title": "Reviving the blog with Quarto",
"section": "Why change?",
"text": "Why change?\nThe HoloViz blog dates back to 2018, and at the time Pelican was chosen as the static site generator together with the pelican-jupyter plugin to add support for authoring blog posts from Jupyter Notebooks. While this combination served us well over the years, we observed that the notebook plugin was deprecated and that there was not much interest among our maintainers and contributors in updating the existing site, which was starting to show its age. We were in desperate need of a change!\n\n\nPelican version of the blog"
},
{
"objectID": "posts/quarto_migration/index.html#choosing-a-framework",
"href": "posts/quarto_migration/index.html#choosing-a-framework",
"title": "Reviving the blog with Quarto",
"section": "Choosing a framework",
"text": "Choosing a framework\nOne of our key requirements was to build the site from Jupyter Notebooks, as the HoloViz tools have first-class notebook support and that is how we generally build our documentation websites. For that purpose we usually use Sphinx together with MyST-NB and some other custom extensions. However, apart from the ABlog extension, the Sphinx ecosystem didn’t seem to provide what we were looking for, and ABlog lacked some features we were potentially interested in (e.g. good integration for sharing on social media). This didn’t leave us with many options other than Quarto!\nQuarto is a recent open-source project that was announced in July 2022 and is sponsored by Posit (formerly known as RStudio). It extends R Markdown, adding, for instance, Jupyter Notebook support. We started experimenting with Quarto once we noticed increasing discussion about it from HoloViz users; we wanted to make sure our tools were working well in that ecosystem, and the blog seemed to be a good place to start.\nWe were quickly convinced that Quarto was the right choice: the user experience was smooth, the documentation was clear and all in one place (unlike the Sphinx ecosystem, where we had to navigate between various extension websites), and it appeared to support all the features we required. The only point that made us hesitate was that Quarto extensions have to be authored in Lua, and none of us had any experience in that language. We decided that this wasn’t a blocker and went ahead with the migration."
},
{
"objectID": "posts/quarto_migration/index.html#migrating-to-quarto",
"href": "posts/quarto_migration/index.html#migrating-to-quarto",
"title": "Reviving the blog with Quarto",
"section": "Migrating to Quarto",
"text": "Migrating to Quarto\nThe migration all happened in this PR:\n\nWe had to convert the <post>.ipynb-meta sidecar files used by the pelican-jupyter files to the special header Quarto needs at the beginning of every document.\nThe notebooks themselves needed few changes, except to handle the nested and indented raw HTML included in Markdown cells that wasn’t displayed as HTML by Quarto but partially wrapped in a <code> HTML element. Removing the indentation fixed this problem (wrapping it in :::{=html} <... ::: would also have worked).\nWe had to move all the posts to the /posts directory which meant that the links to our old blog posts changed. We set up redirect links using the aliases document option to preserve these old links.\nWe decided that we preferred the default listing layout instead of grid.\nWe made some minor styling changes to align it with the styling of other HoloViz websites.\n\n\n\n\nQuarto version of the blog\n\n\nWhile the migration was quick and went smoothly, we listed a few issues that we might fix in future iterations. We are not too surprised that we have a few minor issues as our blog posts often contain a lot of complex HTML and Javascript that aren’t always easy to handle. We welcome contributions!"
},
{
"objectID": "posts/quarto_migration/index.html#easier-contribution",
"href": "posts/quarto_migration/index.html#easier-contribution",
"title": "Reviving the blog with Quarto",
"section": "Easier contribution",
"text": "Easier contribution\nMoving to Quarto improved the contributor experience, with a solid VSCode extension, a nice and fast preview mode, and, once again, excellent documentation.\nWe also made our infrastructure easier to manage, which further improved the contributor experience:\n\nthe site is no longer hosted on AWS but on Github Pages\na development version has been deployed; it is re-built and re-deployed automatically on every Pull Request event\nthe main site is re-built and re-deployed whenever a Pull Request is merged\n\nIf you feel like contributing to the HoloViz blog, head over to its Github repo and follow the instructions!"
},
{
"objectID": "posts/openai_logprobs_colored/index.html",
"href": "posts/openai_logprobs_colored/index.html",
"title": "Evaluate and filter LLM output using logprobs & colored text",
"section": "",
"text": "In many cases, there’s no indication of how confident the model is in its output; LLMs simply try to generate the most likely text based on the input and the model’s training data.\nHowever, with the logprobs parameter, we can now visualize the confidence of the model’s output.\nThis blog demonstrates how to color the text based on the log probabilities of the tokens. The higher the log probability, the more confident the model is in the token.\nThis is useful if you want to…\n\nbetter understand how your system prompt is affecting the model’s output\ncalibrate the model’s temperature to achieve the desired confidence level\nfilter out low-confidence outputs to lessen hallucinations\nsee whether incorporating retrieval augmented generation (RAG) can increase the confidence of the model’s output\nevaluate whether the model’s version affects the confidence of the output\n\n\n\n\nDemo"
},
{
"objectID": "posts/openai_logprobs_colored/index.html#introduction",
"href": "posts/openai_logprobs_colored/index.html#introduction",
"title": "Evaluate and filter LLM output using logprobs & colored text",
"section": "",
"text": "In many cases, there’s no indication of how confident the model is in its output; LLMs simply try to generate the most likely text based on the input and the model’s training data.\nHowever, with the logprobs parameter, we can now visualize the confidence of the model’s output.\nThis blog demonstrates how to color the text based on the log probabilities of the tokens. The higher the log probability, the more confident the model is in the token.\nThis is useful if you want to…\n\nbetter understand how your system prompt is affecting the model’s output\ncalibrate the model’s temperature to achieve the desired confidence level\nfilter out low-confidence outputs to lessen hallucinations\nsee whether incorporating retrieval augmented generation (RAG) can increase the confidence of the model’s output\nevaluate whether the model’s version affects the confidence of the output\n\n\n\n\nDemo"
},
{
"objectID": "posts/openai_logprobs_colored/index.html#tldr",
"href": "posts/openai_logprobs_colored/index.html#tldr",
"title": "Evaluate and filter LLM output using logprobs & colored text",
"section": "TLDR",
"text": "TLDR\nHere’s the full code below.\nHighlights:\n\nPanel to create a chat interface and input widgets to control LLM’s parameters\nTastyMap to generate a limited color palette to map to the log probabilities\nthe logprobs are extracted from the model’s response to use for coloring the text\n\nContinue reading for a simple version of the following code, which additionally features playground-like widgets to control the model’s parameters and system prompt.\nimport os\nimport re\n\nimport numpy as np\nimport panel as pn\nimport tastymap as tm\nfrom openai import AsyncOpenAI\n\npn.extension()\n\nCOLORMAP = \"viridis_r\"\nNUM_COLORS = 8\nSYSTEM_PROMPT = \"\"\"\nBased on the text, classify as one of these options:\n- Feature\n- Bug\n- Docs\nAnswer in one word; no other options are allowed.\n\"\"\".strip()\n\n\ndef color_by_logprob(text, log_prob):\n linear_prob = np.round(np.exp(log_prob) * 100, 2)\n # select index based on probability\n color_index = int(linear_prob // (100 / (len(colors) - 1)))\n\n # Generate HTML output with the chosen color\n if \"'\" in text:\n html_output = f'<span style=\"color: {colors[color_index]};\">{text}</span>'\n else:\n html_output = f\"<span style='color: {colors[color_index]}'>{text}</span>\"\n return html_output\n\n\ndef custom_serializer(content):\n pattern = r\"<span.*?>(.*?)</span>\"\n matches = re.findall(pattern, content)\n if not matches:\n return content\n return matches[0]\n\n\nasync def respond_to_input(contents: str, user: str, instance: pn.chat.ChatInterface):\n if api_key_input.value:\n aclient.api_key = api_key_input.value\n elif not os.environ.get(\"OPENAI_API_KEY\"):\n instance.send(\"Please provide an OpenAI API key\", respond=False, user=\"ChatGPT\")\n\n # add system prompt\n if system_input.value:\n system_message = {\"role\": \"system\", \"content\": system_input.value}\n messages = [system_message]\n else:\n messages = []\n\n # gather messages for memory\n if memory_toggle.value:\n messages += instance.serialize(custom_serializer=custom_serializer)\n else:\n messages.append({\"role\": \"user\", \"content\": contents})\n\n # call API\n response = await aclient.chat.completions.create(\n model=model_selector.value,\n messages=messages,\n stream=True,\n logprobs=True,\n temperature=temperature_input.value,\n max_tokens=max_tokens_input.value,\n seed=seed_input.value,\n )\n\n # stream response\n message = \"\"\n async for chunk in response:\n choice = chunk.choices[0]\n content = choice.delta.content\n log_probs = choice.logprobs\n if content and log_probs:\n log_prob = log_probs.content[0].logprob\n message += color_by_logprob(content, log_prob)\n yield message\n\n\ntmap = tm.cook_tmap(COLORMAP, NUM_COLORS)\ncolors = tmap.to_model(\"hex\")\n\naclient = AsyncOpenAI()\napi_key_input = pn.widgets.PasswordInput(\n name=\"API Key\",\n placeholder=\"sk-...\",\n width=150,\n)\nsystem_input = pn.widgets.TextAreaInput(\n name=\"System Prompt\",\n value=SYSTEM_PROMPT,\n rows=1,\n auto_grow=True,\n)\nmodel_selector = pn.widgets.Select(\n name=\"Model\",\n options=[\"gpt-3.5-turbo\", \"gpt-4\"],\n width=150,\n)\ntemperature_input = pn.widgets.FloatInput(\n name=\"Temperature\", start=0, end=2, step=0.01, value=1, width=100\n)\nmax_tokens_input = pn.widgets.IntInput(name=\"Max Tokens\", start=0, value=256, width=100)\nseed_input = pn.widgets.IntInput(name=\"Seed\", start=0, end=100, value=0, width=100)\nmemory_toggle = pn.widgets.Toggle(\n name=\"Include Memory\", value=False, width=100, margin=(22, 5)\n)\nchat_interface = pn.chat.ChatInterface(\n callback=respond_to_input,\n callback_user=\"ChatGPT\",\n callback_exception=\"verbose\",\n)\n\npn.Column(\n pn.Row(\n api_key_input,\n system_input,\n model_selector,\n temperature_input,\n max_tokens_input,\n seed_input,\n memory_toggle,\n align=\"center\",\n ),\n pn.Row(tmap._repr_html_(), align=\"center\"),\n chat_interface,\n).show()"
},
{
"objectID": "posts/openai_logprobs_colored/index.html#building-the-app",
"href": "posts/openai_logprobs_colored/index.html#building-the-app",
"title": "Evaluate and filter LLM output using logprobs & colored text",
"section": "Building the app",
"text": "Building the app\nTo get started, I usually envision the key components of the app and then build them out one by one.\nAs the first step, let’s try to extract the log probabilities from the model’s streaming response.\nfrom openai import AsyncOpenAI\n\naclient = AsyncOpenAI()\n\nasync def get_log_probs(contents: str):\n response = await aclient.chat.completions.create(\n model=\"gpt-3.5-turbo\",\n messages=[{\"role\": \"user\", \"content\": contents}],\n stream=True,\n logprobs=True,\n )\n\n token_log_probs = {}\n async for chunk in response:\n choice = chunk.choices[0]\n content = choice.delta.content\n log_probs = choice.logprobs\n if content and log_probs:\n log_prob = log_probs.content[0].logprob\n token_log_probs[content] = log_prob\n return token_log_probs\n\nlog_probs = await get_log_probs(\"Say dog or cat.\")\nlog_probs\nOutput: {'Dog': -0.32602254, '.': -0.4711762}\nThese are the log probabilities of the tokens in the response, but they are not exactly intuitive.\nWe can convert these log probabilities to linear probabilities using this formula.\n\nimport numpy as np\n\nfor token, log_prob in log_probs.items():\n linear_prob = np.round(np.exp(log_prob) * 100, 2)\n print(f\"{token}: {linear_prob}%\")\nOutput:\nDog: 72.18%\n.: 62.43%\nNow that we have the linear probabilities, we can map them to a color palette using TastyMap.\nLet’s first try coloring some text in Panel.\nimport panel as pn\n\npn.extension()\n\ntext = \"This is a test sentence.\"\ncolor = \"red\"\nhtml_output = f\"<span style='color: {color}'>{text}</span>\"\npn.pane.Markdown(html_output)\n\n\n\nred sentence\n\n\nGreat, the text is now colored in red.\nWith that knowledge, we can map the linear probabilities to a color palette using TastyMap and display the colorbar.\n\nimport panel as pn\nimport tastymap as tm\n\npn.extension()\n\nCOLORMAP = \"viridis_r\"\nNUM_COLORS = 8\n\ndef color_by_logprob(text, log_prob):\n linear_prob = np.round(np.exp(log_prob) * 100, 2)\n # select index based on probability\n color_index = int(linear_prob // (100 / (len(colors) - 1)))\n\n # Generate HTML output with the chosen color\n if \"'\" in text:\n html_output = f'<span style=\"color: {colors[color_index]};\">{text}</span>'\n else:\n html_output = f\"<span style='color: {colors[color_index]}'>{text}</span>\"\n return html_output\n\n\ntmap = tm.cook_tmap(COLORMAP, NUM_COLORS)\ncolors = tmap.to_model(\"hex\")\nhtml = \"\"\nfor token, log_prob in log_probs.items():\n html += color_by_logprob(token, log_prob)\n\npn.Column(tmap._repr_html_(), pn.pane.HTML(html))\nNext, we can link everything together in a simple chat interface using Panel.\nUse the callback keyword argument to specify the function that will handle the user’s input.\nHere, we use the respond_to_input function to handle the user’s input, which\n\nsends the user’s input to the OpenAI API\nreceives the model’s response\nextracts the log probabilities from the response\ncolors the text based on the log probabilities\nyields (streams) the colored text back to the chat interface\n\nimport panel as pn\nimport tastymap as tm\n\npn.extension()\n\nCOLORMAP = \"viridis_r\"\nNUM_COLORS = 8\n\ndef color_by_logprob(text, log_prob):\n linear_prob = np.round(np.exp(log_prob) * 100, 2)\n # select index based on probability\n color_index = int(linear_prob // (100 / (len(colors) - 1)))\n\n # Generate HTML output with the chosen color\n if \"'\" in text:\n html_output = f'<span style=\"color: {colors[color_index]};\">{text}</span>'\n else:\n html_output = f\"<span style='color: {colors[color_index]}'>{text}</span>\"\n return html_output\n\nasync def respond_to_input(contents: str, user: str, instance: pn.chat.ChatInterface):\n response = await aclient.chat.completions.create(\n model=\"gpt-3.5-turbo\",\n messages=[{\"role\": \"user\", \"content\": contents}],\n stream=True,\n logprobs=True,\n )\n\n message = \"\"\n async for chunk in response:\n choice = chunk.choices[0]\n content = choice.delta.content\n log_probs = 
choice.logprobs\n if content and log_probs:\n log_prob = log_probs.content[0].logprob\n message += color_by_logprob(content, log_prob)\n yield message\n\ntmap = tm.cook_tmap(COLORMAP, NUM_COLORS)\ncolors = tmap.to_model(\"hex\")\n\nchat_interface = pn.chat.ChatInterface(\n callback=respond_to_input,\n callback_user=\"ChatGPT\",\n callback_exception=\"verbose\",\n)\nchat_interface.send(\"Say dog or cat.\")\npn.Column(\n tmap._repr_html_(),\n chat_interface,\n align=\"center\",\n).servable()\n\n\n\nsimple app"
},
{
"objectID": "posts/openai_logprobs_colored/index.html#conclusion",
"href": "posts/openai_logprobs_colored/index.html#conclusion",
"title": "Evaluate and filter LLM output using logprobs & colored text",
"section": "Conclusion",
"text": "Conclusion\nCongrats! You’ve built a chat interface that colors the text based on the log probabilities of the tokens in the model’s response.\nFeel free to study the code above and modify it to suit your needs; in the TLDR section, I have additionally added widgets to control the model’s parameters and system prompt!\nIf you are interested in learning more about how to build AI chatbots in Panel, please read our related blog posts:\n\nBuild a Mixtral Chatbot with Panel\nBuilding AI Chatbots with Mistral and Llama2\nBuilding a Retrieval Augmented Generation Chatbot\nHow to Build Your Own Panel AI Chatbots\nBuild a RAG chatbot to answer questions about Python libraries\nBuild an AI Chatbot to Run Code and Tweak plots\n\nIf you find Panel useful, please consider giving us a star on Github (https://github.com/holoviz/panel). If you have any questions, feel free to ask on our Discourse. Happy coding!"
},
{
"objectID": "posts/examples-website-modernization/index.html",
"href": "posts/examples-website-modernization/index.html",
"title": "HoloViz Examples Gallery Modernization",
"section": "",
"text": "HoloViz is a collection of open-source tools designed to make Python data visualization easier and more powerful. It includes tools like Panel, hvPlot, HoloViews, and Datashader. Most tools have their own gallery, such as the Panel App Gallery or the HoloViews Gallery, which focus on their specific features and APIs.\nThe HoloViz Examples Gallery is different. It showcases more than 40 real-world examples that combine multiple HoloViz tools into domain-specific workflows (geospatial, finance, neuroscience, mathematics, cybersecurity, etc.). These examples go beyond demonstrating individual tools—they tell data stories. This makes it easier to understand how to use the tools together and apply them to practical problems.\nThe Examples Gallery isn’t just a learning resource. It’s also a great way for users to contribute to HoloViz. By bringing their domain expertise, contributors can add new examples that reflect unique use cases and ideas. Updated examples, better contributor guides, and a clear process for adding content make it easier than ever to join the HoloViz community.\nThanks to a NumFocus small development grant, we’ve made significant improvements to the Examples Gallery, and we’re excited to share the details in this blog post.\n\n\n\n\nRun Anywhere: Each example is available as an anaconda-project1 zip file, so you can run it with the correct dependencies and datasets on any platform. Learn how here.\nInteractive Options: Most examples include a read-only notebook (try one) or an interactive Panel app (try one). These let you explore interactively, even when full interactivity isn’t possible on the static site. Thanks to Anaconda for hosting these!\n\nMaintaining a large and complex collection of examples is a big challenge for an open-source team. Over time, the gallery became outdated, relying on tools and APIs that no longer reflected best practices. 
Updating the infrastructure was straightforward, but refreshing the content took much more work. That’s why we applied for a NumFocus Small Development Grant in late 2022, to modernize the gallery and bring in new contributors.\nWith the $10,000 grant awarded in early 2023, two new contributors, Isaiah and Jason, joined the project. Together, they tackled these goals:\n\nUpdate high-priority examples to reflect current best practices\nImprove and simplify contributor guidelines\nOrganize examples into categories for easier navigation\nAdd new examples to highlight underrepresented domains\n\nUsing a detailed checklist, Jason and Isaiah—mentored by Demetris and Maxime and supported by critical feedback from other members of the HoloViz team (Jim, Philipp, Simon, and Andrew)—updated and modernized 15 examples. Their work included restructuring examples, enhancing UIs, updating dependencies, and adopting modern APIs. In the rest of this post, we’ll focus on how they improved the APIs."
},
{
"objectID": "posts/examples-website-modernization/index.html#holoviz-simplifying-data-visualization",
"href": "posts/examples-website-modernization/index.html#holoviz-simplifying-data-visualization",
"title": "HoloViz Examples Gallery Modernization",
"section": "",
"text": "HoloViz is a collection of open-source tools designed to make Python data visualization easier and more powerful. It includes tools like Panel, hvPlot, HoloViews, and Datashader. Most tools have their own gallery, such as the Panel App Gallery or the HoloViews Gallery, which focus on their specific features and APIs.\nThe HoloViz Examples Gallery is different. It showcases more than 40 real-world examples that combine multiple HoloViz tools into domain-specific workflows (geospatial, finance, neuroscience, mathematics, cybersecurity, etc.). These examples go beyond demonstrating individual tools—they tell data stories. This makes it easier to understand how to use the tools together and apply them to practical problems.\nThe Examples Gallery isn’t just a learning resource. It’s also a great way for users to contribute to HoloViz. By bringing their domain expertise, contributors can add new examples that reflect unique use cases and ideas. Updated examples, better contributor guides, and a clear process for adding content make it easier than ever to join the HoloViz community.\nThanks to a NumFocus small development grant, we’ve made significant improvements to the Examples Gallery, and we’re excited to share the details in this blog post.\n\n\n\n\nRun Anywhere: Each example is available as an anaconda-project1 zip file, so you can run it with the correct dependencies and datasets on any platform. Learn how here.\nInteractive Options: Most examples include a read-only notebook (try one) or an interactive Panel app (try one). These let you explore interactively, even when full interactivity isn’t possible on the static site. Thanks to Anaconda for hosting these!\n\nMaintaining a large and complex collection of examples is a big challenge for an open-source team. Over time, the gallery became outdated, relying on tools and APIs that no longer reflected best practices. 
Updating the infrastructure was straightforward, but refreshing the content took much more work. That’s why we applied for a NumFocus Small Development Grant in late 2022, to modernize the gallery and bring in new contributors.\nWith the $10,000 grant awarded in early 2023, two new contributors, Isaiah and Jason, joined the project. Together, they tackled these goals:\n\nUpdate high-priority examples to reflect current best practices\nImprove and simplify contributor guidelines\nOrganize examples into categories for easier navigation\nAdd new examples to highlight underrepresented domains\n\nUsing a detailed checklist, Jason and Isaiah—mentored by Demetris and Maxime and supported by critical feedback from other members of the HoloViz team (Jim, Philipp, Simon, and Andrew)—updated and modernized 15 examples. Their work included restructuring examples, enhancing UIs, updating dependencies, and adopting modern APIs. In the rest of this post, we’ll focus on how they improved the APIs."
},
{
"objectID": "posts/examples-website-modernization/index.html#modernization-apis",
"href": "posts/examples-website-modernization/index.html#modernization-apis",
"title": "HoloViz Examples Gallery Modernization",
"section": "Modernization: APIs",
"text": "Modernization: APIs\n\nPlotting API: Prioritize hvPlot over HoloViews\nMany examples were created before hvPlot was available or mature enough to use. hvPlot offers a simple, Pandas- and Xarray-friendly interface while exposing many capabilities offered by other HoloViz tools. In many cases, we replaced HoloViews code with hvPlot for its accessibility and ease of use. However, hvPlot isn’t a universal replacement—features like complex interactivity (e.g., linked selections, streams) are still exclusive to HoloViews.\nFor instance, the NYC Taxi example creates a scatter plot to see the relationship between distance and fare cost. The modernized version uses hvPlot for clarity and simplicity.\nOriginal code:\nscatter = hv.Scatter(samples, 'trip_distance', 'fare_amount')\nlabelled = scatter.redim.label(trip_distance=\"Distance, miles\", fare_amount=\"Fare, $\") \nlabelled.redim.range(trip_distance=(0, 20), fare_amount=(0, 40)).opts(size=5)\nModernized code:\nsamples.hvplot.scatter(\n 'trip_distance', 'fare_amount', xlabel='Distance, miles',\n ylabel='Fare, $', xlim=(0, 20), ylim=(0, 40), s=5,\n)\n\n\n\nLarge Data Rendering: Prioritize rasterize over datashade for Bokeh Plots\nrasterize and datashade are HoloViews operations powered by Datashader, designed to handle large datasets by transforming elements into images where each pixel represents an aggregate of the underlying data. While both are essential for visualizing large data, they differ in functionality and use cases.\n\ndatashade: Produces an RGB image that is sent directly to the front-end (browser) and displayed as is. This approach offers fast rendering but limits interactivity, such as hover tooltips or color bars, because the raw data is not available to the plotting library.\nrasterize: Generates a multidimensional array of aggregated data, which is sent to the front-end for further processing, such as applying colormaps. 
Although this requires more work from Bokeh, it allows for richer interactivity, including hover information and client-side color bars.\n\nDue to these advantages, rasterize is now the recommended choice for most large dataset visualizations. Ongoing development continues to expand its capabilities and improve its integration across the HoloViz stack when using Bokeh as the plotting backend.\nFor example, the NYC Taxi example demonstrates how rasterize can render 10 million data points interactively. The plot shows drop-off locations, with passenger counts aggregated per pixel and displayed on hover.\nModernized code:\ndf.hvplot.points(\n 'dropoff_x', 'dropoff_y', rasterize=True, dynspread=True,\n aggregator=ds.sum('passenger_count'), cnorm='eq_hist', cmap=cc.fire[100:],\n xaxis=None, yaxis=None, width=900, height=500, bgcolor='black',\n)\n\n\n\nInteractivity API: Prioritize pn.bind()\nOver the years, Panel has introduced multiple interactive APIs, and choosing the right one can be challenging. As the package has matured and user feedback has been incorporated, pn.bind() has become the preferred option for linking widgets to functions, offering more flexibility and a cleaner syntax than pn.interact() (deprecated), @pn.depends(), or .param.watch() for most use cases.\nImportantly, exceptions remain, such as the recommendation to use @param.depends() to decorate methods for applications built with param.Parameterized classes, or using .param.watch() for more fine-grained control. Additionally, the Portfolio Optimizer example demonstrates the use of the new reactive expression API (.rx), which extends pn.bind() and the deprecated pn.interact() for reactive programming. This experimental .rx API is a promising addition, and we encourage users to explore it and share feedback.\nIn the Attractors example, we updated the code by replacing the deprecated pn.interact() with pn.bind(). 
This modernized approach explicitly links widgets to a function that plots an attractor using Datashader.\nOriginal code:\npn.interact(clifford_plot, n=(1,20000000), colormap=ps)\nModernized code:\nwidgets = {\n 'a': pn.widgets.FloatSlider(value=1.9, end=2.0, step=0.1, name='a'),\n 'b': pn.widgets.FloatSlider(value=1.9, end=2.0, step=0.1, name='b'),\n 'c': pn.widgets.FloatSlider(value=1.9, end=2.0, step=0.1, name='c'),\n 'd': pn.widgets.FloatSlider(value=0.8, end=1.0, step=0.1, name='d'),\n 'n': pn.widgets.IntSlider(value=10000000, start=1000, end=20000000, step=100, name='n'),\n 'colormap': pn.widgets.Select(value=ps['bmw'], options=ps, name='colormap'),\n}\n\nbound_clifford_plot = pn.bind(clifford_plot, **widgets)\npn.Column(*widgets.values(), bound_clifford_plot)"
},
{
"objectID": "posts/examples-website-modernization/index.html#improved-contributor-guide",
"href": "posts/examples-website-modernization/index.html#improved-contributor-guide",
"title": "HoloViz Examples Gallery Modernization",
"section": "Improved contributor guide",
"text": "Improved contributor guide\nTo support the modernization efforts and encourage new contributions, the contributor guide was updated to reflect the changes in infrastructure. The guide now provides clearer instructions and step-by-step guidance for new users to create and contribute examples to the gallery.\nTo make the process even more accessible, Isaiah created a detailed video tutorial that walks through each step of contributing a new example."
},
{
"objectID": "posts/examples-website-modernization/index.html#new-example-fifa-world-cup-2018",
"href": "posts/examples-website-modernization/index.html#new-example-fifa-world-cup-2018",
"title": "HoloViz Examples Gallery Modernization",
"section": "New Example: FIFA World Cup 2018",
"text": "New Example: FIFA World Cup 2018\nDriven by his passion for football (soccer), Isaiah contributed an exciting example analyzing data from the FIFA World Cup 2018 tournament. This example delves into the performances of iconic players like Kylian Mbappe and Lionel Messi during the event.\nYou can explore the example in multiple ways:\n\nView the example’s page\nRun the notebook live\nInteract with the Panel app"
},
{
"objectID": "posts/examples-website-modernization/index.html#reflections",
"href": "posts/examples-website-modernization/index.html#reflections",
"title": "HoloViz Examples Gallery Modernization",
"section": "Reflections",
"text": "Reflections\n\nJason’s Reflections\nContributing to the revitalization of the examples website was an enlightening experience for me. Beyond learning about HoloViz tools, I gained a deeper understanding of open source contributions, including the workflow intricacies. This includes creating pull requests when making new changes or opening a new issue to document bugs found in the examples. Neither of which I’ve used when developing my own projects. Setting up the environment was also a tricky process as I had to do it in WSL. This exposure to WSL has helped me when working with other projects that are required to be using Linux.\nOverall, I am thankful to have been given this experience as a contributor as I’ve acquired a fundamental understanding of the tools that could be used.\n\n\nIsaiah’s Reflections\nWorking on this project has not only been an enjoyable experience but also an incredibly educational one. The journey began with a steep learning curve, but overcoming those initial challenges has made the entire process more rewarding.\nKey Learnings\nPanel and HoloViz Libraries\nThe Panel and HoloViz libraries were at the core of our project. Panel, being a high-level app and dashboarding solution for Python, allowed us to create interactive visualizations effortlessly. HoloViz, with its suite of tools designed to work seamlessly together, made data visualization tasks more intuitive and efficient. These tools have significantly enhanced my ability to create compelling and interactive data visualizations.\nDatashader\nModernizing examples using the Datashader library was one of the highlights of the project. Datashader excels at creating meaningful visualizations from large datasets, a critical capability in the age of big data. My extensive use of Datashader has turned it into a reliable tool that I now feel confident using for future projects.\nAnaconda-Project\nAnother crucial aspect of the project was mastering anaconda-project. 
It facilitated managing project dependencies and environments, ensuring that the project was reproducible at various levels. This experience underscored the importance of reproducibility in data science, which is vital for collaboration and long-term project sustainability.\nOvercoming Challenges\nThe initial phase was riddled with challenges, particularly in setting up the project locally and navigating the submission process for Pull Requests. The support from the project leaders was invaluable. Their guidance helped streamline our workflow, making subsequent tasks more manageable and efficient. This collaborative effort not only improved my technical skills but also reinforced the importance of teamwork and effective communication.\nFuture Prospects\nThis project has been a significant milestone in my career. Working with the HoloViz team has not only broadened my technical expertise but also inspired me to continue exploring and utilizing these tools. I am excited to integrate HoloViz and its associated libraries into my future personal and professional data science endeavors.\nThis project has been an enriching experience, providing both challenges and opportunities for growth. The skills and knowledge gained will undoubtedly influence my future work, and I am grateful for the chance to contribute to such a dynamic and innovative project.\n\n\nDemetris and Maxime’s reflections\nWe want to thank Jason and Isaiah for the incredible effort they put into this project. As early-career developers, they took on a complex task—working with HoloViz’s expansive ecosystem—and did a great job making meaningful contributions. It’s not easy to navigate so many tools, APIs, and evolving documentation, but they approached the challenge with curiosity and determination.\nAlong the way, they helped us identify gaps in our APIs and brought fresh perspectives to our discussions about user experience. 
Their insights sparked conversations that led to improvements not just in the examples, but across HoloViz tools. We also appreciated their patience and adaptability as we all worked together to smooth out the edges of our first project funded by a NumFocus grant.\nThis collaboration wasn’t just about the examples—they’ve made lasting contributions to the ecosystem and the community. We’re excited to see where their journeys take them next. Jason and Isaiah, thank you for your hard work!"
},
{
"objectID": "posts/examples-website-modernization/index.html#footnotes",
"href": "posts/examples-website-modernization/index.html#footnotes",
"title": "HoloViz Examples Gallery Modernization",
"section": "Footnotes",
"text": "Footnotes\n\n\nanaconda-project is a project management tool created in 2016 and that predates most (if not all!) of the tools of this type like Poetry, Hatch, PDM, Pixi, conda-project, etc. It is no longer maintained and we do not recommend adopting it, some day we’ll migrate to another tool.↩︎"
},
{
"objectID": "index.html",
"href": "index.html",
"title": "HoloViz Blog",
"section": "",
"text": "Lumen AI Announcement\n\n\n\n\n\n\nannouncement\n\n\nlumen\n\n\n\nAnnouncing the release of Lumen AI\n\n\n\n\n\nJan 7, 2025\n\n\nPhilipp Rudiger & Andrew Huang\n\n\n\n\n\n\n\n\n\n\n\n\nHoloViz Examples Gallery Modernization\n\n\n\n\n\n\nannouncement\n\n\n\nAnnouncement of the modernized version of the HoloViz examples gallery, a curated collection of domain-specific narrative examples using various HoloViz projects.\n\n\n\n\n\nDec 11, 2024\n\n\nJunshen Tao, Isaiah Akorita, Demetris Roumis, Maxime Liquet\n\n\n\n\n\n\n\n\n\n\n\n\nHoloViews 1.20 - A year in review\n\n\n\n\n\n\nholoviews\n\n\nrelease\n\n\n\nRelease announcement for HoloViews 1.20\n\n\n\n\n\nDec 11, 2024\n\n\nSimon Hansen\n\n\n\n\n\n\n\n\n\n\n\n\nPlotting made easy with hvPlot: 0.11 release\n\n\n\n\n\n\nrelease\n\n\nhvplot\n\n\n\nRelease announcement for hvPlot 0.11, including: DuckDB integration, automatic lat/lon conversion on tiled maps, subcoordinate-y axis support, and more!\n\n\n\n\n\nSep 27, 2024\n\n\nMaxime Liquet\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 1.5.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 1.5\n\n\n\n\n\nSep 13, 2024\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPlotting made easy with hvPlot: 0.9 and 0.10 releases\n\n\n\n\n\n\nrelease\n\n\nhvplot\n\n\n\nRelease announcement for hvPlot 0.9 and 0.10, including: Polars integration, Xarray support added to the Explorer, Large timeseries exploration made easier, and more!\n\n\n\n\n\nMay 6, 2024\n\n\nMaxime Liquet\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 1.4.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 1.4\n\n\n\n\n\nMar 28, 2024\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nHoloViews Streams for Exploring Multidimensional Data\n\n\n\n\n\n\nholoviews\n\n\nstreams\n\n\n\nExplores a 4D dataset (time, level, lat, lon) dataset using HoloViews and Panel.\n\n\n\n\n\nMar 20, 2024\n\n\nAndrew Huang\n\n\n\n\n\n\n\n\n\n\n\n\nEvaluate and filter LLM output using logprobs & colored 
text\n\n\n\n\n\n\nshowcase\n\n\npanel\n\n\nai\n\n\nllm\n\n\nchatbot\n\n\nopenai\n\n\n\nHave you ever wanted to evaluate the confidence of LLM’s output? Utilize log probabilities!\n\n\n\n\n\nFeb 5, 2024\n\n\nAndrew Huang\n\n\n\n\n\n\n\n\n\n\n\n\nPanel AI Chatbot Tips: Memory and Downloadable Conversations\n\n\n\n\n\n\nshowcase\n\n\npanel\n\n\nai\n\n\nllm\n\n\nchatbot\n\n\n\nIn this blog post, we’ll explore how to build a simple AI chatbot, enhance it with memory capabilities, and finally, implement a feature to download conversations for further fine-tuning.\n\n\n\n\n\nDec 22, 2023\n\n\nAndrew Huang, Sophia Yang\n\n\n\n\n\n\n\n\n\n\n\n\nBuild an AI Chatbot to Run Code and Tweak plots\n\n\n\n\n\n\nshowcase\n\n\npanel\n\n\nai\n\n\nllm\n\n\nchatbot\n\n\n\nPowered by Panel and Mixtral 8x7B\n\n\n\n\n\nDec 22, 2023\n\n\nAndrew Huang, Sophia Yang\n\n\n\n\n\n\n\n\n\n\n\n\nParam 2.0 release\n\n\n\n\n\n\nrelease\n\n\nparam\n\n\n\nRelease announcement for Param 2.0\n\n\n\n\n\nDec 22, 2023\n\n\nMaxime Liquet\n\n\n\n\n\n\n\n\n\n\n\n\nBuild a Mixtral Chatbot with Panel\n\n\n\n\n\n\nshowcase\n\n\npanel\n\n\nai\n\n\nllm\n\n\nchatbot\n\n\n\nWith Mistral API, Transformers, and llama.cpp\n\n\n\n\n\nDec 13, 2023\n\n\nAndrew Huang, Philipp Rudiger, Sophia Yang\n\n\n\n\n\n\n\n\n\n\n\n\nBuild a RAG chatbot to answer questions about Python libraries\n\n\n\n\n\n\nshowcase\n\n\npanel\n\n\n\nAccess the Python universe with Fleet Context and Panel\n\n\n\n\n\nDec 7, 2023\n\n\nAndrew Huang, Sophia Yang\n\n\n\n\n\n\n\n\n\n\n\n\nReviving the blog with Quarto\n\n\n\n\n\n\nannouncement\n\n\n\nAnnouncing the migration of our blog to Quarto.\n\n\n\n\n\nNov 19, 2023\n\n\nMaxime Liquet\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 1.3.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 1.3\n\n\n\n\n\nOct 24, 2023\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nBuilding custom Panel widgets using ReactiveHTML\n\n\n\n\n\n\nshowcase\n\n\npanel\n\n\n\nBuilding custom Panel widgets using 
ReactiveHTML\n\n\n\n\n\nAug 17, 2023\n\n\nAndrew Huang, Sophia Yang\n\n\n\n\n\n\n\n\n\n\n\n\nHoloViz Survey Results\n\n\n\n\n\n\nsurvey\n\n\n\nResults from first HoloViz user survey\n\n\n\n\n\nJul 14, 2023\n\n\nDemetris Roumis\n\n\n\n\n\n\n\n\n\n\n\n\nBuilding an interactive ML dashboard in Panel\n\n\n\n\n\n\nshowcase\n\n\npanel\n\n\n\nBuilding an interactive ML dashboard in Panel\n\n\n\n\n\nJun 6, 2023\n\n\nAndrew Huang, Sophia Yang, Philipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 1.0 RC\n\n\n\n\n\n\nannouncement\n\n\npanel\n\n\n\nAnnouncing the availability of a Panel 1.0 release candidate\n\n\n\n\n\nApr 28, 2023\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 0.14.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 0.14\n\n\n\n\n\nOct 5, 2022\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nhvPlot 0.8.0 Release\n\n\n\n\n\n\nrelease\n\n\nhvplot\n\n\n\nRelease announcement for hvPlot 0.8.0\n\n\n\n\n\nAug 25, 2022\n\n\nMaxime Liquet\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 0.13.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 0.13\n\n\n\n\n\nMar 18, 2022\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 0.12.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 0.12\n\n\n\n\n\nJul 19, 2021\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nDatashader 0.13 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Datashader 0.13\n\n\n\n\n\nJun 23, 2021\n\n\nJames A. 
Bednar\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 0.11.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 0.11\n\n\n\n\n\nFeb 3, 2021\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 0.10.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 0.10\n\n\n\n\n\nOct 22, 2020\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nHoloViews 1.13 Release\n\n\n\n\n\n\nrelease\n\n\nholoviews\n\n\n\nRelease announcement for HoloViews 1.13\n\n\n\n\n\nJun 15, 2020\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 0.8.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 0.8\n\n\n\n\n\nJan 31, 2020\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPanel 0.7.0 Release\n\n\n\n\n\n\nrelease\n\n\npanel\n\n\n\nRelease announcement for Panel 0.7\n\n\n\n\n\nNov 18, 2019\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nPyViz at SciPy 2019\n\n\n\n\n\n\nscipy\n\n\n\nDiscussion about PyViz landscape at SciPy 2019 BoF\n\n\n\n\n\nJul 12, 2019\n\n\nJames A. Bednar, Thomas Caswell\n\n\n\n\n\n\n\n\n\n\n\n\nPyViz.org and HoloViz.org\n\n\n\n\n\n\nannouncement\n\n\n\nAnnouncing HoloViz splitting off from PyViz\n\n\n\n\n\nJul 2, 2019\n\n\nJames A. Bednar\n\n\n\n\n\n\n\n\n\n\n\n\nPanel Announcement\n\n\n\n\n\n\nannouncement\n\n\npanel\n\n\n\nPublic Announcement of the Panel library\n\n\n\n\n\nMay 28, 2019\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nhvPlot Announcement\n\n\n\n\n\n\nannouncement\n\n\nhvplot\n\n\n\nAnnouncing the release of hvPlot\n\n\n\n\n\nJan 31, 2019\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nGeoViews 1.5 Release\n\n\n\n\n\n\nrelease\n\n\ngeoviews\n\n\n\nRelease announcement for GeoViews 1.5\n\n\n\n\n\nMay 14, 2018\n\n\nPhilipp Rudiger\n\n\n\n\n\n\n\n\n\n\n\n\nHoloViews 1.10 Release\n\n\n\n\n\n\nrelease\n\n\nholoviews\n\n\n\nRelease announcement for HoloViews 1.10\n\n\n\n\n\nApr 24, 2018\n\n\nPhilipp Rudiger\n\n\n\n\n\n\nNo matching items\n\n\n \n\n Back to top"
},
{
"objectID": "posts/ai_chatbot_tips_memory_download/index.html",
"href": "posts/ai_chatbot_tips_memory_download/index.html",
"title": "Panel AI Chatbot Tips: Memory and Downloadable Conversations",
"section": "",
"text": "In this blog post, we’ll explore how to build a simple AI chatbot, enhance it with memory capabilities, and finally, implement a feature to download conversations for further fine-tuning.\nWe will cover:\nBefore we get started, let’s first make sure we install needed packages like panel, mistralai, openai in our Python environment and save our API keys as environment variables:\nexport MISTRAL_API_KEY=\"TYPE YOUR API KEY\"\nexport OPENAI_API_KEY=\"TYPE YOUR API KEY\""
},
{
"objectID": "posts/ai_chatbot_tips_memory_download/index.html#mistral-models",
"href": "posts/ai_chatbot_tips_memory_download/index.html#mistral-models",
"title": "Panel AI Chatbot Tips: Memory and Downloadable Conversations",
"section": "Mistral models",
"text": "Mistral models\nIn this blog post, we will only use the Mistral API. If you are interested in using Mistral models locally, check out our previous blog post Build a Mixtral Chatbot with Panel to see how we used Mistral API, transformers, llama.cpp, and Panel to create AI chatbots that use the Mixtral 8x7B Instruct model.\nWhen we do not need to keep our conversation history, we are only sending one round of user message to model. Thus, in this example, the messages that get sent to the model are defined as [ChatMessage(role=\"user\", content=contents)].\nimport os\nimport panel as pn\nfrom mistralai.async_client import MistralAsyncClient\nfrom mistralai.models.chat_completion import ChatMessage\n\npn.extension()\n\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n model = \"mistral-small\"\n messages = [\n ChatMessage(role=\"user\", content=contents)\n ]\n response = client.chat_stream(model=model, messages=messages)\n\n message = \"\"\n async for chunk in response:\n part = chunk.choices[0].delta.content\n if part is not None:\n message += part\n yield message\n\n\nclient = MistralAsyncClient(api_key=os.environ[\"MISTRAL_API_KEY\"])\nchat_interface = pn.chat.ChatInterface(callback=callback, callback_user=\"Mixtral\")\nchat_interface.send(\n \"Send a message to get a reply from Mixtral!\", user=\"System\", respond=False\n)\nchat_interface.servable()"
},
{
"objectID": "posts/ai_chatbot_tips_memory_download/index.html#openai-models",
"href": "posts/ai_chatbot_tips_memory_download/index.html#openai-models",
"title": "Panel AI Chatbot Tips: Memory and Downloadable Conversations",
"section": "OpenAI models",
"text": "OpenAI models\nThe code of using OpenAI models looks very similar. We are using OpenAI’s API with async/await to use the asynchronous client. To use async, we simply import AsyncOpenAI instead of OpenAI and add await with the API call.\nimport panel as pn\nfrom openai import AsyncOpenAI\n\npn.extension()\n\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n messages = [{\"role\": \"user\", \"content\": contents}]\n response = await aclient.chat.completions.create(\n model=\"gpt-3.5-turbo\",\n messages=messages,\n stream=True,\n )\n message = \"\"\n async for chunk in response:\n part = chunk.choices[0].delta.content\n if part is not None:\n message += part\n yield message\n\n\naclient = AsyncOpenAI()\nchat_interface = pn.chat.ChatInterface(callback=callback, callback_user=\"ChatGPT\")\nchat_interface.send(\n \"Send a message to get a reply from ChatGPT!\", user=\"System\", respond=False\n)\nchat_interface.servable()"
},
{
"objectID": "posts/ai_chatbot_tips_memory_download/index.html#mistral-models-1",
"href": "posts/ai_chatbot_tips_memory_download/index.html#mistral-models-1",
"title": "Panel AI Chatbot Tips: Memory and Downloadable Conversations",
"section": "Mistral models",
"text": "Mistral models\n\nimport os\nimport panel as pn\nfrom mistralai.async_client import MistralAsyncClient\nfrom mistralai.models.chat_completion import ChatMessage\n\npn.extension()\n\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n model = \"mistral-small\"\n messages = [\n ChatMessage(**message)\n for message in instance.serialize()[1:]\n ]\n response = client.chat_stream(model=model, messages=messages)\n\n message = \"\"\n async for chunk in response:\n part = chunk.choices[0].delta.content\n if part is not None:\n message += part\n yield message\n\n\nclient = MistralAsyncClient(api_key=os.environ[\"MISTRAL_API_KEY\"])\nchat_interface = pn.chat.ChatInterface(callback=callback, callback_user=\"Mixtral\")\nchat_interface.send(\n \"Send a message to get a reply from Mixtral!\", user=\"System\", respond=False\n)\nchat_interface.servable()\nHere in this example, the model indeed knows what we were talking about previously."
},
{
"objectID": "posts/ai_chatbot_tips_memory_download/index.html#openai-models-1",
"href": "posts/ai_chatbot_tips_memory_download/index.html#openai-models-1",
"title": "Panel AI Chatbot Tips: Memory and Downloadable Conversations",
"section": "OpenAI models",
"text": "OpenAI models\nThe code for OpenAI models is even simpler. Simply change messages to instance.serialize()[1:], and you will send all the chat history except for the first message to the OpenAI API.\nimport panel as pn\nfrom openai import AsyncOpenAI\n\npn.extension()\n\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n messages = instance.serialize()[1:]\n response = await aclient.chat.completions.create(\n model=\"gpt-3.5-turbo\",\n messages=messages,\n stream=True,\n )\n message = \"\"\n async for chunk in response:\n part = chunk.choices[0].delta.content\n if part is not None:\n message += part\n yield message\n\n\naclient = AsyncOpenAI()\nchat_interface = pn.chat.ChatInterface(callback=callback, callback_user=\"ChatGPT\")\nchat_interface.send(\n \"Send a message to get a reply from ChatGPT!\", user=\"System\", respond=False\n)\nchat_interface.servable()"
},
{
"objectID": "posts/ai_chatbot_tips_memory_download/index.html#mistral-models-2",
"href": "posts/ai_chatbot_tips_memory_download/index.html#mistral-models-2",
"title": "Panel AI Chatbot Tips: Memory and Downloadable Conversations",
"section": "Mistral models",
"text": "Mistral models\nWhat we are adding here is the file_download widget. When we click this button, it will execute the download_history function, which simply dumps our chat history (chat_interface.serialize()) into JSON and saves it as history.json.\nThe output is a well-formatted JSON file that can easily be used for future model fine-tuning.\nimport os\nimport panel as pn\nfrom mistralai.async_client import MistralAsyncClient\nfrom mistralai.models.chat_completion import ChatMessage\nfrom io import StringIO\nimport json\n\npn.extension()\n\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n model = \"mistral-small\"\n messages = [\n ChatMessage(**message)\n for message in instance.serialize()[1:]\n ]\n print(messages)\n response = client.chat_stream(model=model, messages=messages)\n\n message = \"\"\n async for chunk in response:\n part = chunk.choices[0].delta.content\n if part is not None:\n message += part\n yield message\n\ndef download_history():\n buf = StringIO()\n json.dump(chat_interface.serialize(), buf)\n buf.seek(0)\n return buf\n\nfile_download = pn.widgets.FileDownload(\n callback=download_history, filename=\"history.json\"\n)\nheader = pn.Row(pn.HSpacer(), file_download)\n\n\nclient = MistralAsyncClient(api_key=os.environ[\"MISTRAL_API_KEY\"])\nchat_interface = pn.chat.ChatInterface(\n callback=callback, \n callback_user=\"Mixtral\",\n header=header\n )\nchat_interface.send(\n \"Send a message to get a reply from Mixtral!\", user=\"System\", respond=False\n)\nchat_interface.servable()"
},
{
"objectID": "posts/ai_chatbot_tips_memory_download/index.html#openai-models-2",
"href": "posts/ai_chatbot_tips_memory_download/index.html#openai-models-2",
"title": "Panel AI Chatbot Tips: Memory and Downloadable Conversations",
"section": "OpenAI models",
"text": "OpenAI models\nAdding exactly the same code, we can also easily download all conversation with OpenAI models:\nimport panel as pn\nfrom openai import AsyncOpenAI\nfrom io import StringIO\nimport json\n\npn.extension()\n\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n messages = instance.serialize()[1:]\n response = await aclient.chat.completions.create(\n model=\"gpt-3.5-turbo\",\n messages=messages,\n stream=True,\n )\n message = \"\"\n async for chunk in response:\n part = chunk.choices[0].delta.content\n if part is not None:\n message += part\n yield message\n\ndef download_history():\n buf = StringIO()\n json.dump(chat_interface.serialize(), buf)\n buf.seek(0)\n return buf\n\nfile_download = pn.widgets.FileDownload(\n callback=download_history, filename=\"history.json\"\n)\nheader = pn.Row(pn.HSpacer(), file_download)\n\naclient = AsyncOpenAI()\nchat_interface = pn.chat.ChatInterface(\n callback=callback, \n callback_user=\"ChatGPT\",\n header=header\n )\nchat_interface.send(\n \"Send a message to get a reply from ChatGPT!\", user=\"System\", respond=False\n)\nchat_interface.servable()"
},
{
"objectID": "posts/mixtral/index.html",
"href": "posts/mixtral/index.html",
"title": "Build a Mixtral Chatbot with Panel",
"section": "",
"text": "Mistral AI just announced the Mixtral 8x7B and the Mixtral 8x7B Instruct models. These models have shown really impressive performance, outperforming Llama 2 and GPT 3.5 in many benchmarks. They’ve quickly become the most popular open-weights models in the AI world. In this blog post, we will walk you through how to build AI chatbots with the Mixtral 8x7B Instruct model using the Panel chat interface. We will cover three methods:"
},
{
"objectID": "posts/mixtral/index.html#build-a-panel-chatbot",
"href": "posts/mixtral/index.html#build-a-panel-chatbot",
"title": "Build a Mixtral Chatbot with Panel",
"section": "Build a Panel chatbot",
"text": "Build a Panel chatbot\nBefore we build a Panel chatbot, let’s make sure we install mistralai and panel in our Python environment and set up the Mistral API key as an environment variable: export MISTRAL_API_KEY=\"TYPE YOUR KEY\".\n\nWe wrap the code above in a function callback.\nThe key to building a Panel chatbot is to define pn.chat.ChatInterface. Specifically, through its callback parameter, we need to define how the chatbot responds to user messages – the callback function.\nTo turn a Python file or a notebook into a deployable app, simply append .servable() to the Panel object chat_interface.\n\n\"\"\"\nDemonstrates how to use the `ChatInterface` to create a chatbot using\nMistral API.\n\"\"\"\nimport os\nimport panel as pn\nfrom mistralai.client import MistralClient\nfrom mistralai.models.chat_completion import ChatMessage\n\npn.extension()\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n\n model = \"mistral-small\"\n messages = [ChatMessage(role=\"user\", content=contents)]\n response = client.chat_stream(model=model, messages=messages)\n \n message = \"\"\n for chunk in response:\n part = chunk.choices[0].delta.content\n if part is not None:\n message += part\n yield message\n\n\nclient = MistralClient(api_key=os.environ[\"MISTRAL_API_KEY\"])\nchat_interface = pn.chat.ChatInterface(callback=callback, callback_user=\"Mixtral\")\nchat_interface.send(\n \"Send a message to get a reply from Mixtral!\", user=\"System\", respond=False\n)\nchat_interface.servable()\nTo launch a server using CLI and interact with this app, simply run panel serve app.py and you can interact with the model:"
},
{
"objectID": "posts/mixtral/index.html#build-a-panel-chatbot-1",
"href": "posts/mixtral/index.html#build-a-panel-chatbot-1",
"title": "Build a Mixtral Chatbot with Panel",
"section": "Build a Panel chatbot",
"text": "Build a Panel chatbot\nSame as what we saw in Method 1, we wrap the code above in a function callback, and define the callback in the pn.chat.ChatInterface function:\nimport panel as pn\nfrom transformers import AutoTokenizer, TextStreamer\nimport transformers\nimport torch\n\npn.extension()\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n messages = [{\"role\": \"user\", \"content\": contents}]\n prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n streamer = TextStreamer(tokenizer, skip_prompt=True)\n outputs = pipeline(prompt, streamer=streamer, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)\n message = \"\"\n for token in outputs[0][\"generated_text\"]:\n message += token\n yield message\n \nmodel = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\n\ntokenizer = AutoTokenizer.from_pretrained(model)\npipeline = transformers.pipeline(\n \"text-generation\",\n model=model,\n model_kwargs={\"torch_dtype\": torch.float16, \"load_in_4bit\": True},\n)\nchat_interface = pn.chat.ChatInterface(callback=callback, callback_user=\"Mixtral\")\nchat_interface.send(\n \"Send a message to get a reply from Mixtral!\", user=\"System\", respond=False\n)\nchat_interface.servable()\nRun panel serve app.py in CLI to interact with this app. Here is an example of our interaction with the model:"
},
{
"objectID": "posts/mixtral/index.html#set-up",
"href": "posts/mixtral/index.html#set-up",
"title": "Build a Mixtral Chatbot with Panel",
"section": "Set up",
"text": "Set up\nFirst, we need to download llama-cpp-python, which is a Python binding for llama.cpp. Depending on your computer, the steps to install it might look different. Since we are using a MacBook M1 Pro with a Metal GPU, here are the steps to install llama-cpp-python with Metal: https://llama-cpp-python.readthedocs.io/en/latest/install/macos/. Here is what I did:\n!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python\nSecond, let’s download the 4-bit quantized version of the Mixtral-8x7B-Instruct model from Hugging Face. Note that this file is quite big, about 26GB.\nwget https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q4_0.gguf\nBecause Mixtral is not merged in llama.cpp yet, we need to do the following steps.\n# REF: https://github.com/abetlen/llama-cpp-python/issues/1000\ngit clone https://github.com/ggerganov/llama.cpp\ncd llama.cpp\ngit checkout mixtral\nmake -j\nmake libllama.so\nFinally, don’t forget to install the other needed packages such as transformers and panel."
},
{
"objectID": "posts/mixtral/index.html#run-mixtral",
"href": "posts/mixtral/index.html#run-mixtral",
"title": "Build a Mixtral Chatbot with Panel",
"section": "Run Mixtral",
"text": "Run Mixtral\nBelow is the Python code for running Mixtral with llama.cpp. Here are the steps:\n\nWe first need to define an environment variable LLAMA_CPP_LIB pointing to the libllama.so file, which is saved under the llama.cpp directory we got from git clone earlier.\nThen we define our llm pointing to the mixtral-8x7b-instruct-v0.1.Q4_0.gguf file we downloaded from wget.\nNote that we need to load the tokenizer from the Mixtral-8x7B-Instruct model and format the input text the way the model expects.\nThen we can get responses from llm.create_completion. The default max_tokens is 16. To get reasonably good responses, let’s increase this number to 256.\n\nimport os\nos.environ[\"LLAMA_CPP_LIB\"] = \"/PATH WHERE YOU SAVED THE llama.cpp DIRECTORY FROM GIT CLONE/llama.cpp/libllama.so\"\n\nfrom llama_cpp import Llama\nfrom transformers import AutoTokenizer\n\nllm = Llama(\n model_path=\"./mixtral-8x7b-instruct-v0.1.Q4_0.gguf\",\n n_gpu_layers=0,\n)\n\nmodel = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\ntokenizer = AutoTokenizer.from_pretrained(model)\nmessages = [{\"role\": \"user\", \"content\": \"Explain what a Mixture of Experts is in less than 100 words.\"}]\nprompt = tokenizer.apply_chat_template(messages, tokenize=False)\n\nresponse = llm.create_completion(prompt, max_tokens=256)\nresponse['choices'][0]['text']\nHere you can see the code running in Jupyter Notebook cells. Please be patient as this will take some time. After a few minutes, the model outputs results based on our input prompt:"
},
{
"objectID": "posts/mixtral/index.html#build-a-panel-chatbot-2",
"href": "posts/mixtral/index.html#build-a-panel-chatbot-2",
"title": "Build a Mixtral Chatbot with Panel",
"section": "Build a Panel chatbot",
"text": "Build a Panel chatbot\n\nSame as what we have seen before, let’s wrap the code logic above in a function called callback, which is how we want our chatbot to respond to user messages.\nThen in pn.chat.ChatInterface, we define callback as this callback function.\n\nimport os\nos.environ[\"LLAMA_CPP_LIB\"] = \"/PATH WHERE YOU SAVED THE llama.cpp DIRECTORY FROM GIT CLONE/llama.cpp/libllama.so\"\n\nfrom llama_cpp import Llama\nfrom transformers import AutoTokenizer\nimport panel as pn\n\npn.extension()\n\nasync def callback(contents: str, user: str, instance: pn.chat.ChatInterface):\n\n messages = [{\"role\": \"user\", \"content\": contents}]\n prompt = tokenizer.apply_chat_template(messages, tokenize=False)\n response = llm.create_completion(prompt, max_tokens=256, stream=True)\n\n message = \"\"\n for chunk in response:\n message += chunk['choices'][0]['text']\n yield message\n \nmodel = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\ntokenizer = AutoTokenizer.from_pretrained(model)\nllm = Llama(\n model_path=\"./mixtral-8x7b-instruct-v0.1.Q4_0.gguf\",\n n_gpu_layers=0,\n)\nchat_interface = pn.chat.ChatInterface(\n callback=callback, \n callback_user=\"Mixtral\",\n message_params={\"show_reaction_icons\": False}\n )\nchat_interface.send(\n \"Send a message to get a reply from Mixtral!\", user=\"System\", respond=False\n)\nchat_interface.servable()\n\nFinally, we can run panel serve app.py to interact with this app. As you can see in this gif, it’s actually quite slow generating each word because we are running on a local Macbook."
},
{
"objectID": "posts/pyviz_scipy_bof_2019/index.html",
"href": "posts/pyviz_scipy_bof_2019/index.html",
"title": "PyViz at SciPy 2019",
"section": "",
"text": "The Python Data Visualization Birds-of-a-Feather session at the scientific Python conference brought together a dozen different authors of Python packages for visualizing data. Each author was asked to state one thing that they found exciting right now about Python data viz from their own perspective, along with another issue that they found frustrating or that needs attention. Panelists then voted on a few issues brought up in the introductions, and answered a variety of questions from the audience. Our notes from the meeting are below for all those interested."
},
{
"objectID": "posts/pyviz_scipy_bof_2019/index.html#panelist-introductions",
"href": "posts/pyviz_scipy_bof_2019/index.html#panelist-introductions",
"title": "PyViz at SciPy 2019",
"section": "Panelist introductions",
"text": "Panelist introductions\nJames A. Bednar (Panel, hvPlot, Datashader, Colorcet, GeoViews) Intro and overview of the Python viz landscape and the new pyviz.org website, including live status of 60+ Python viz tools. Excited about dashboarding in Python – now a real thing that other languages should be jealous of! Frustrated by interoperability issues, from trying to assemble various libraries to solve big problems.\nThomas Caswell (Matplotlib/PyQtGraph) Diversity in PyViz libraries shows wide usage across domains; diversity is a feature, not a bug. No perfect solution for every domain. PyQtGraph - great for high speed desktop but please don’t try to do web dashboarding with it. Matplotlib is mostly in maintenance and housekeeping mode at the moment, but starting to think of what Matplotlib 4 should look like. PyQtGraph being revived after being dormant for a while; moving to py3 only release.\nJon Mease (Plotly) Independent contractor, speaking for the Python interface. Excited about V4 of Plotly, new themes, having it run in more places, integration of seaborn-style high-level API. Frustrated by the fact that the choice of library is often dictated by what interface you use—cmd line, Spyder, Jupyter, etc.\nMadicken Munk (yt) yt is trying to expand to other non-astronomy domains. Having a major release in the next year. Switching to external unit conversion system, from Nathan Goldbaum. Creating a domain context system to make it easier to integrate with new domains. A frustration is that it is difficult to separate out domain-specific stuff. yt naming still very astro-specific, even though the functionality is largely domain agnostic. yt maintains Jupyter widget library built on rust compiled to WebAssembly.\nJosef Heinen (GR) GR focuses on speed and transparency, used in Matplotlib but language agnostic, which is useful for scientists working with multiple languages. Can be integrated into QT, GTK, etc. Currently transpiling software into JS which will allow browser use and will enable matplotlib browser web backend. He personally prefers Julia.\nJean-Luc Stevens (HoloViews) HoloViews is a layer/API on top of other libraries (Matplotlib, Bokeh, Plotly). Focuses on exploratory work. Working on maturing, polishing, and further documenting the system, which is now used as a lower-level base for other libraries like hvPlot. Frustration - fragmented, quickly moving ecosystem makes integration difficult.\nDavid Hoese (VisPy) Python wrapper around OpenGL. High level interface. Excited about improved number of contributors to VisPy. Frustrations 1. Platform support for OpenGL – Apple has dropped OS X support! 2. Backwards compatibility concerns make it difficult to maintain VisPy – new features are hard to support without breaking support for older standards.\nFilipe Fernandes (Folium) Folium is widely used, but is not a healthy project. Only use folium if you are already on it. Will be discontinued in 2-3 years. For new projects use alternatives, e.g. ipyleaflet.\nThomas Robitaille (Glue) Glue provides multidimensional analysis. Uses other packages for viz. Excited about Jupyter ecosystem to dashboards to desktop apps. Frustrated by the number of viz packages; not sure which to use and which to contribute missing functionality to. It’s especially difficult when they all have such different governance models; it’s not always clear which ones you can have impact on, so it is difficult to invest in substantial efforts.\nMatthew McCormick (VTK) Will update pyviz.org soon with more information. 2D/3D spatial viz. New version vtkjs supports WebGL. Provides volume rendering in the browser, now with Jupyter widgets. Would like to see - lots of progress in packaging, but need dashboarding tools and Qt and be able to create single-file applications using things like pyinstaller.\nMartin Renou (ipywidgets) Pushing widget libraries outside of Python into C++. Also Voila library for dashboarding. Open issue if you are unable to convert notebook to dashboard.\nJulia Signell (Bokeh) Exciting: very stable now, much nicer than it was a few years ago. Please try again if you previously found rough edges. Bokeh is not supported by only one company, with widely spread developers, NumFOCUS support, and a completely open model. Go to discourse.bokeh.org if you want to get involved.\nBrian Granger (Jupyter/Altair) (based on Vega/Vega-Lite). Excited about the impact seen of having a declarative viz grammar. Key lesson learned: start with data model and not the API. Enables building bindings for different languages, and offers lessons for other viz libs. Frustrating - packaging, not just Python and C, but JS is also involved. Really challenging issues.\nJake Vanderplas (pdVega, Altair) Excited about the ongoing efforts in pandas to expand its plotting API to target backends beyond matplotlib (pandas#14130)."
},
{
"objectID": "posts/pyviz_scipy_bof_2019/index.html#votes",
"href": "posts/pyviz_scipy_bof_2019/index.html#votes",
"title": "PyViz at SciPy 2019",
"section": "Votes",
"text": "Votes\nVote: Raise your hand if your project is truly ready and willing to accept substantial community contributions. (All voted yes!)\nVote: Are there too many viz libraries? (2-5 voted yes, depending on caveats)"
},
{
"objectID": "posts/pyviz_scipy_bof_2019/index.html#audience-questions",
"href": "posts/pyviz_scipy_bof_2019/index.html#audience-questions",
"title": "PyViz at SciPy 2019",
"section": "Audience Questions:",
"text": "Audience Questions:\n1. I work a lot in web dev; what is the state of PyViz libraries’ ability to make viz accessible?\nFor dashboarding libraries like Voila and Panel, some aspects are currently only solved at the JS/HTML level, by using a responsive template that supports mobile devices, larger fonts for low vision, etc. Most of those issues have not been taken on by the Python packages directly. Even when using such a template, it is up to the users to use good colors, etc., though Colorcet and Viscm offer good colorblind-safe colormaps that can help. Textual summary of graphical representations is an open area. Guides to making viz accessible would be an excellent addition to PyViz.org. Suggestion from Marinna Martini: www.w3.org/WAI.\n2. What are good Python options for displaying real-time data from sensors?\nPyQtGraph was designed for this use case, providing high frame rates for many sensors. VisPy is also great for this, with examples in the repo for how to do this using various choices of backend. Bokeh Spectrogram example is good, though not quite as high performance as native GUI systems. GR is also an option, with examples in documentation. HoloViews has a streaming data guide and integrates with Streamz lib for easy plotting of streaming data sources, using Datashader when needed for large datasets. VTK based tools for images/point sets have fairly good support for real-time usage. Plotly.py with Dash and Matplotlib also have ways to do this.\n3. What support is available for Dask and CuPy data structures?\nVisPy has to convert to NumPy first. hvPlot works with Dask arrays and dataframes directly.\n4. Is there support for CMYK-safe colormaps, so that figures are perceptually uniform when printed, e.g. on conference posters?\nNot that anyone on the panel is aware.\n5. Is there anyone fighting against the emerging consensus around tidy dataframes as input structure, which is annoying in practice after investing in well-structured multi-indexes?\nhvPlot supports wide data formats, though not currently multi-indexes directly. Altair assumes a tidy dataframe. Altair only covers a small subset of viz space, with a complex SQL-like pipeline, and hence needs a constrained data format. Altair may need helper tools to convert to tidy formats. hvPlot is a good example of this approach; it’s a high-level wrapper that works well with wide (non-tidy) data formats, converting it to the tidy format expected by HoloViews.\n6. Will there be changes to the backend for alternate display formats, etc. in Matplotlib?\nShort answer: yes. Long answer: still in planning stages. Better export paths are needed that are more semantic and can go to Bokeh, Altair, etc. We need many more libraries that are wrappers on the main libraries. Building those helper libraries needs to be easy and simple to spin up."
},
{
"objectID": "posts/tweak-mpl-chat/index.html",
"href": "posts/tweak-mpl-chat/index.html",
"title": "Build an AI Chatbot to Run Code and Tweak plots",
"section": "",
"text": "Have you wasted hours tweaking a plot for a presentation or academic paper, like searching StackOverflow on how to change the font size of the labels? The future is now; let LLMs improve your plots for you!\nIn this blog post, we will build an AI chatbot with Panel and Mixtral 8x7b that will help you generate code and execute code to tweak a Matplotlib plot. It has two functionalities:"
},
{
"objectID": "posts/tweak-mpl-chat/index.html#step-0-import-packages",
"href": "posts/tweak-mpl-chat/index.html#step-0-import-packages",
"title": "Build an AI Chatbot to Run Code and Tweak plots",
"section": "Step 0: Import packages",
"text": "Step 0: Import packages\nNow let’s move on to the actual code. Make sure you install the required packages panel and mistralai in your Python environment and import the needed packages:\nimport re\nimport os\nimport panel as pn\nfrom mistralai.async_client import MistralAsyncClient\nfrom mistralai.models.chat_completion import ChatMessage\nfrom panel.io.mime_render import exec_with_return\n\npn.extension(\"codeeditor\", sizing_mode=\"stretch_width\")"
},
{
"objectID": "posts/tweak-mpl-chat/index.html#step-1-define-default-behaviors",
"href": "posts/tweak-mpl-chat/index.html#step-1-define-default-behaviors",
"title": "Build an AI Chatbot to Run Code and Tweak plots",
"section": "Step 1: Define default behaviors",
"text": "Step 1: Define default behaviors\nIn the code for this step, we define the following:\n\nThe LLM model we would like to use: LLM_MODEL=\"mistral-small\"\nThe system message:\n\nYou are a renowned data visualization expert\nwith a strong background in matplotlib.\nYour primary goal is to assist the user\nin edit the code based on user request\nusing best practices. Simply provide code \nin code fences (```python). You must have `fig`\nas the last line of code\n\nThe format of the user content, where we combine the user message and the current Python code.\nThe default Matplotlib plot that users see when they interact with the chatbot.\n\nFeel free to change any of these default settings according to your own use cases."
},
{
"objectID": "posts/tweak-mpl-chat/index.html#step-2-define-the-callback-function",
"href": "posts/tweak-mpl-chat/index.html#step-2-define-the-callback-function",
"title": "Build an AI Chatbot to Run Code and Tweak plots",
"section": "Step 2: Define the callback function",
"text": "Step 2: Define the callback function\nThis function defines how our chatbot responds to user messages. This code looks a little more complex than our examples in previous blog posts because the AI needs to respond with not only text, but also code. - We keep all the message history as a list in messages - When users send a message, we combine both the text of the message and the current state of the code from the code_editor widget (see Step 3) and add them to the messages list. - We send all these messages to the Mistral model. - Then we extract Python code from the model output and update the Python code in code_editor.\nclient = MistralAsyncClient(api_key=os.environ[\"MISTRAL_API_KEY\"])\n\nasync def callback(content: str, user: str, instance: pn.chat.ChatInterface):\n # system\n messages = [SYSTEM_MESSAGE]\n\n # history\n messages.extend([ChatMessage(**message) for message in instance.serialize()[1:-1]])\n\n # new user contents\n user_content = USER_CONTENT_FORMAT.format(\n content=content, code=code_editor.value\n )\n messages.append(ChatMessage(role=\"user\", content=user_content))\n\n # stream LLM tokens\n message = \"\"\n async for chunk in client.chat_stream(model=LLM_MODEL, messages=messages):\n if chunk.choices[0].delta.content is not None:\n message += chunk.choices[0].delta.content\n yield message\n\n # extract code\n llm_code = re.findall(r\"```python\\n(.*)\\n```\", message, re.DOTALL)[0]\n if llm_code.splitlines()[-1].strip() != \"fig\":\n llm_code += \"\\nfig\"\n code_editor.value = llm_code"
},
{
"objectID": "posts/tweak-mpl-chat/index.html#step-3-define-widgets",
"href": "posts/tweak-mpl-chat/index.html#step-3-define-widgets",
"title": "Build an AI Chatbot to Run Code and Tweak plots",
"section": "Step 3: Define widgets",
"text": "Step 3: Define widgets\n\nChatInterface: Panel provides a built-in ChatInterface widget that provides a user-friendly front-end chatbot interface for various kinds of messages. callback points to the function that we defined in the last step. It executes when a user sends a message.\n\nchat_interface = pn.chat.ChatInterface(\n callback=callback,\n show_clear=False,\n show_undo=False,\n show_button_name=False,\n message_params=dict(\n show_reaction_icons=False,\n show_copy_icon=False,\n ),\n height=700,\n callback_exception=\"verbose\",\n)\n\nmatplotlib_pane is a Panel object that shows the Matplotlib plot from the Python code. How does it execute Python code and return the plot? The secret is the exec_with_return function, which executes a code snippet and returns the resulting output. By default, matplotlib_pane executes the default Matplotlib code we defined in Step 1.\n\nmatplotlib_pane = pn.pane.Matplotlib(\n exec_with_return(DEFAULT_MATPLOTLIB),\n sizing_mode=\"stretch_both\",\n tight=True,\n)\n\n\n\n\n\n\ncode_editor is another Panel object that allows embedding a code editor.\n\ncode_editor = pn.widgets.CodeEditor(\n value=DEFAULT_MATPLOTLIB,\n sizing_mode=\"stretch_both\",\n)\n\n\n\n\n\n\nHow does the plot get updated?\nWhenever the code changes, the plot gets updated. Specifically, the matplotlib_pane watches for code changes in code_editor using the param.watch method.\n# watch for code changes\ndef update_plot(event):\n matplotlib_pane.object = exec_with_return(event.new)\ncode_editor.param.watch(update_plot, \"value\")\nSo when does the code get updated?\n\nWhenever the AI assistant outputs Python code, this Python code will become the new value of code_editor. This is defined in the callback function in Step 2.\nWhenever we change code directly in the code_editor, the code will change and the plot will update automatically."
},
{
"objectID": "posts/tweak-mpl-chat/index.html#step-4-define-layout",
"href": "posts/tweak-mpl-chat/index.html#step-4-define-layout",
"title": "Build an AI Chatbot to Run Code and Tweak plots",
"section": "Step 4: Define layout",
"text": "Step 4: Define layout\nFinally, we can define how we’d like each widget to be placed in our app.\n# lay them out\ntabs = pn.Tabs(\n (\"Plot\", matplotlib_pane),\n (\"Code\", code_editor),\n)\n\nsidebar = [chat_interface]\nmain = [tabs]\ntemplate = pn.template.FastListTemplate(\n sidebar=sidebar,\n main=main,\n sidebar_width=600,\n main_layout=None,\n accent_base_color=\"#fd7000\",\n header_background=\"#fd7000\",\n)\ntemplate.servable()\nThen run panel serve app.py to launch a server using CLI and interact with this app."
},
{
"objectID": "posts/fleet_ai/index.html",
"href": "posts/fleet_ai/index.html",
"title": "Build a RAG chatbot to answer questions about Python libraries",
"section": "",
"text": "Interested in asking questions about Python’s latest and greatest libraries? This is the chatbot for you! Fleet Context offers 4M+ high-quality custom embeddings of the top 1000+ Python libraries, while Panel can provide a Chat Interface UI to build a Retrieval-Augmented Generation (RAG) chatbot with Fleet Context.\nWhy is this chatbot useful? It’s because most language models are not trained on the most up-to-date Python package docs and thus do not have information about the recent Python libraries like llamaindex, LangChain, etc. To be able to answer questions about these libraries, we can retrieve relevant information from Python library docs and generate valid and improved responses based on retrieved information.\nRun the app: https://huggingface.co/spaces/ahuang11/panel-fleet\nCode: https://huggingface.co/spaces/ahuang11/panel-fleet/tree/main"
},
{
"objectID": "posts/fleet_ai/index.html#command-line-interface",
"href": "posts/fleet_ai/index.html#command-line-interface",
"title": "Build a RAG chatbot to answer questions about Python libraries",
"section": "1. Command line interface",
"text": "1. Command line interface\nOnce we define the OpenAI environment variable export OPENAI_API_KEY=xxx, we can run context in the command line and start ask questions about Python libraries. For example, here I asked “what is HoloViz Panel?”. What I really like about Fleet is that it provides references for us to check."
},
{
"objectID": "posts/fleet_ai/index.html#python-console",
"href": "posts/fleet_ai/index.html#python-console",
"title": "Build a RAG chatbot to answer questions about Python libraries",
"section": "2. Python console",
"text": "2. Python console\nWe can query embeddings directly from the provided hosted vector database with the query method from the context library. When we ask a question “What is HoloViz Panel?”, it returned defined number (k=2) of related text chunks from the Panel docs.\nNote that the returned results include many metadata such as library_id, page_id, parent, section_id, title, text, type, etc., which are available for us to use and query."
},
{
"objectID": "posts/fleet_ai/index.html#import-packages",
"href": "posts/fleet_ai/index.html#import-packages",
"title": "Build a RAG chatbot to answer questions about Python libraries",
"section": "0. Import packages",
"text": "0. Import packages\nBefore we get started, let’s make sure we install the needed packages and import the packages:\nfrom context import query\nfrom openai import AsyncOpenAI\nimport panel as pn\npn.extension()"
},
{
"objectID": "posts/fleet_ai/index.html#define-the-system-prompt",
"href": "posts/fleet_ai/index.html#define-the-system-prompt",
"title": "Build a RAG chatbot to answer questions about Python libraries",
"section": "1. Define the system prompt",
"text": "1. Define the system prompt\nFull credit to the Fleet Context team, we took this system prompt and tweaked it a bit from their code:\n# taken from fleet context\nSYSTEM_PROMPT = \"\"\"\nYou are an expert in Python libraries. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.\nEach token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.\nYour users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.\nYour users are also in a CLI environment. You are capable of writing and running code. DO NOT write hypothetical code. ALWAYS write real code that will execute and run end-to-end.\nInstructions:\n- Be objective, direct. Include literal information from the context, don't add any conclusion or subjective information.\n- When writing code, ALWAYS have some sort of output (like a print statement). If you're writing a function, call it at the end. Do not generate the output, because the user can run it themselves.\n- ALWAYS cite your sources. Context will be given to you after the text ### Context source_url ### with source_url being the url to the file. For example, ### Context https://example.com/docs/api.html#files ### will have a source_url of https://example.com/docs/api.html#files.\n- When you cite your source, please cite it as [num] with `num` starting at 1 and incrementing with each source cited (1, 2, 3, ...). At the bottom, have a newline-separated `num: source_url` at the end of the response. ALWAYS add a new line between sources or else the user won't be able to read it. 
DO NOT convert links into markdown, EVER! If you do that, the user will not be able to click on the links.\nFor example:\n**Context 1**: https://example.com/docs/api.html#pdfs\nI'm a big fan of PDFs.\n**Context 2**: https://example.com/docs/api.html#csvs\nI'm a big fan of CSVs.\n### Prompt ###\nWhat is this person a big fan of?\n### Response ###\nThis person is a big fan of PDFs[1] and CSVs[2].\n1: https://example.com/docs/api.html#pdfs\n2: https://example.com/docs/api.html#csvs\n\"\"\""
},
{
"objectID": "posts/fleet_ai/index.html#define-chat-interface",
"href": "posts/fleet_ai/index.html#define-chat-interface",
"title": "Build a RAG chatbot to answer questions about Python libraries",
"section": "2. Define chat interface",
"text": "2. Define chat interface\nThe key component of defining a Panel chat interface is pn.chat.ChatInterface. Specifically, in the callback method, we need to define how the chat bot responds – the answer function.\nIn this function, we: - Initialize the system prompt - Used the Fleet Context query method to query k=3 relevant text chunks for our given question - We format the retrieved text chunks, URLs, and user message into the required OpenAI message format - We provide the message history into an OpenAI model. - Then we stream the responses asynchronously from OpenAI.\nasync def answer(contents, user, instance):\n # start with system prompt\n messages = [{\"role\": \"system\", \"content\": SYSTEM_PROMPT}]\n\n # add context to the user input\n context = \"\"\n fleet_responses = query(contents, k=3)\n for i, response in enumerate(fleet_responses):\n context += (\n f\"\\n\\n**Context {i}**: {response['metadata']['url']}\\n\"\n f\"{response['metadata']['text']}\"\n )\n instance.send(context, avatar=\"🛩️\", user=\"Fleet Context\", respond=False)\n\n # get history of messages (skipping the intro message)\n # and serialize fleet context messages as \"user\" role\n messages.extend(\n instance.serialize(role_names={\"user\": [\"user\", \"Fleet Context\"]})[1:]\n )\n\n openai_response = await client.chat.completions.create(\n model=MODEL, messages=messages, temperature=0.2, stream=True\n )\n\n message = \"\"\n async for chunk in openai_response:\n token = chunk.choices[0].delta.content\n if token:\n message += token\n yield message\n\n\nclient = AsyncOpenAI()\nintro_message = pn.chat.ChatMessage(\"Ask me anything about Python libraries!\", user=\"System\")\nchat_interface = pn.chat.ChatInterface(intro_message, callback=answer, callback_user=\"OpenAI\")"
},
{
"objectID": "posts/fleet_ai/index.html#format-everything-in-a-template",
"href": "posts/fleet_ai/index.html#format-everything-in-a-template",
"title": "Build a RAG chatbot to answer questions about Python libraries",
"section": "3. Format everything in a template",
"text": "3. Format everything in a template\nFinally we format everything in a template and run panel serve app.py in the command line to get the final app:\ntemplate = pn.template.FastListTemplate(\n main=[chat_interface], \n title=\"Panel UI of Fleet Context 🛩️\"\n)\ntemplate.servable()\n\n\n\nDemo of the Python Library Document RAG Chatbot\n\n\n\nNow, you should have a working AI chatbot that can answer questions about Python libraries. If you would like to add more complex RAG features. LlamaIndex has incorporated it into its system. Here is a guide if you would like to experiment Fleet Context with LlamaIndex: Fleet Context Embeddings - Building a Hybrid Search Engine for the Llamaindex Library."
},
{
"objectID": "posts/holoviews_streams/index.html",
"href": "posts/holoviews_streams/index.html",
"title": "HoloViews Streams for Exploring Multidimensional Data",
"section": "",
"text": "Follow along to build an app that uses a 4D dataset (level, time, lat, lon) and explore it by"
},
{
"objectID": "posts/holoviews_streams/index.html#basics",
"href": "posts/holoviews_streams/index.html#basics",
"title": "HoloViews Streams for Exploring Multidimensional Data",
"section": "Basics",
"text": "Basics\n\nImport the necessary libraries\nMost of the time, using Python is just knowing what’s out there and importing it!\n\nimport param\nimport numpy as np\nimport xarray as xr\nimport panel as pn\nimport hvplot.xarray\nimport geoviews as gv\nimport holoviews as hv\nfrom geoviews.streams import PolyDraw\nfrom metpy.interpolate import cross_section\nimport cartopy.crs as ccrs\n\npn.extension()\ngv.extension(\"bokeh\")\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGetting something working\nBelow I show three ways to download a piece of the NCEP Reanalysis dataset from NOAA.\nIt’s one of my favorite datasets for testing and writing examples because it’s so straightforward to use: - no API key required, which means no need to sign up, verify email, etc. - can be small or large, if 4X daily, concatenated across times, variables, etc - is multi-dimensional (time, level, lat, lon)\nBelow are three variations of downloading a dataset. Note, 1 only works in notebooks; 2 and 3 work in both notebooks and scripts.\nSince I usually work in a Jupyter notebook, I like to use 1 due to its simplicity–just a ! 
+ wget + copied url and an optional --no-clobber, -nc flag.\n\n# 1.\n!wget -nc https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis/Dailies/pressure/air.2024.nc\n\n# 2.\n# import subprocess\n# subprocess.run(\"wget https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis/Dailies/pressure/air.2024.nc\", shell=True)\n\n# 3.\n# import requests\n# with requests.get(\"https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis/Dailies/pressure/air.2024.nc\") as response:\n#     response.raise_for_status()\n#     with open(\"air.2024.nc\", \"wb\") as f:\n#         f.write(response.content)\n\nFile ‘air.2024.nc’ already there; not retrieving.\n\n\n\nThe hardest part of any project is getting started (something about static friction > kinetic friction).\nHowever, once you get started, things get easier, so what I usually do is take baby steps and get something shown up front immediately.\nFortunately, XArray + hvPlot makes it possible!\n\nds = xr.open_dataset(\"air.2024.nc\", drop_variables=[\"time_bnds\"])\n\nds\n\n<xarray.Dataset>\nDimensions:  (level: 17, lat: 73, lon: 144, time: 80)\nCoordinates:\n  * level    (level) float32 1e+03 925.0 850.0 700.0 ... 50.0 30.0 20.0 10.0\n  * lat      (lat) float32 90.0 87.5 85.0 82.5 80.0 ... -82.5 -85.0 -87.5 -90.0\n  * lon      (lon) float32 0.0 2.5 5.0 7.5 10.0 ... 350.0 352.5 355.0 357.5\n  * time     (time) datetime64[ns] 2024-01-01 2024-01-02 ... 2024-03-20\nData variables:\n    air      (time, level, lat, lon) float32 ...\nAttributes:\n    Conventions:    COARDS\n    title:          mean daily NMC reanalysis (2014)\n    history:        created 2013/12 by Hoop (netCDF2.3)\n    description:    Data is from NMC initialized reanalysis\\n(4x/day).  It co...\n    platform:       Model\n    dataset_title:  NCEP-NCAR Reanalysis 1\n    References:     http://www.psl.noaa.gov/data/gridded/data.ncep.reanalysis...\n\n\n\nbase_map = ds.hvplot(\"lon\", \"lat\")\nbase_map\n\n\n\n\n\n\nCustomizing\nAdd keywords such as coastline, cmap, and framewise=False (for consistent colorbar) to the call for a much more polished plot!\nFor better compatibility, I convert longitudes from 0:360 to -180:180 and sort–many packages just work better that way.\n\n# for interactivity purposes on the blog, limit the number of times and levels\nds_sel = ds.isel(time=slice(0, 3), level=slice(0, 8))\nds_sel[\"lon\"] = (ds_sel[\"lon\"] + 180) % 360 - 180\nds_sel = ds_sel.sortby(\"lon\")\n\nmap_plot = ds_sel.hvplot(\n    \"lon\",\n    \"lat\",\n    coastline=True,\n    cmap=\"RdYlBu_r\",\n    clabel=\"Air Temperature [K]\",\n    framewise=False,\n    dynamic=False,\n)\nmap_plot\n\n\n\n\n\n \n\n\n\n\n\n\nFixed latitude cross section\nWe can easily show a static, vertical cross section of the dataset too!\n\nds_cs = ds_sel.sel(lat=50)  # cross section across 50°N\n\n# cs -> cross section\ncs_plot = ds_cs.hvplot(\n    \"lon\",\n    \"level\",\n    cmap=\"RdYlBu_r\",\n    clabel=\"Air Temperature [K]\",\n    flip_yaxis=True,\n    framewise=False,\n    dynamic=False,\n)\n\ncs_plot\n\n/Users/ahuang/miniconda3/envs/panel/lib/python3.10/site-packages/holoviews/core/data/xarray.py:340: UserWarning: The `squeeze` kwarg to GroupBy is being
removed.Pass .groupby(..., squeeze=False) to disable squeezing, which is the new default, and to silence this warning.\n  for k, v in dataset.data.groupby(index_dims[0].name):\nWARNING:param.Image10741: Image dimension level is not evenly sampled to relative tolerance of 0.001. Please use the QuadMesh element for irregularly sampled data or set a higher tolerance on hv.config.image_rtol or the rtol parameter in the Image constructor.\n\n\n\n\n\n \n\n\n\n\n\n\nDiagonal cross section\nThis is only a cross section across a fixed latitude; what if we wanted a cross section across a diagonal?\nWe can use MetPy’s cross_section function to interpolate the data along any line!\nIt’s crucial to note that the start and end keywords follow latitude-longitude (y, x) pair, NOT (x, y)!\n\nds_sel = ds_sel.metpy.parse_cf()  # so it contains proper metadata for metpy to recognize\n\nds_cs = cross_section(ds_sel.isel(time=0), start=(50, -130), end=(50, -50))\nds_cs\n\n<xarray.Dataset>\nDimensions:    (level: 8, index: 100)\nCoordinates:\n  * level      (level) float32 1e+03 925.0 850.0 700.0 600.0 500.0 400.0 300.0\n    time       datetime64[ns] 2024-01-01\n    metpy_crs  object Projection: latitude_longitude\n    lon        (index) float64 -130.0 -129.4 -128.7 ... -51.3 -50.65 -50.0\n    lat        (index) float64 50.0 50.27 50.53 50.79 ... 50.79 50.53 50.27 50.0\n  * index      (index) int64 0 1 2 3 4 5 6 7 8 9 ... 91 92 93 94 95 96 97 98 99\nData variables:\n    air        (level, index) float64 281.1 280.8 280.5 ... 225.2 225.4 225.7\nAttributes:\n    Conventions:    COARDS\n    title:          mean daily NMC reanalysis (2014)\n    history:        created 2013/12 by Hoop (netCDF2.3)\n    description:    Data is from NMC initialized reanalysis\\n(4x/day).  It co...\n    platform:       Model\n    dataset_title:  NCEP-NCAR Reanalysis 1\n    References:     http://www.psl.noaa.gov/data/gridded/data.ncep.reanalysis...\n\n\nSince the x dimension is now index, we also need to properly format the xtick labels with lat and lon coordinates.\n\nxticks = [\n    (i, f\"({abs(lon):.0f}°W, {lat:.0f}°N)\")  # format the xticks\n    for i, (lat, lon) in enumerate(zip(ds_cs[\"lat\"], ds_cs[\"lon\"]))\n]\n\nds_cs.hvplot(\n    \"index\",\n    \"level\",\n    cmap=\"RdYlBu_r\",\n    xticks=xticks[::15],\n    xlabel=\"Coordinates\",\n    clabel=\"Air Temperature [K]\",\n    flip_yaxis=True,\n    framewise=False,\n    dynamic=False,\n)\n\nWARNING:param.Image11026: Image dimension level is not evenly sampled to relative tolerance of 0.001.
Please use the QuadMesh element for irregularly sampled data or set a higher tolerance on hv.config.image_rtol or the rtol parameter in the Image constructor.\n\n\n\n\n\n\n \n\n\n\n\n\n\nJoined together\nFinally, we can lay out both plots by “adding” them.\n\n(map_plot + cs_plot).cols(1)"
},
{
"objectID": "posts/holoviews_streams/index.html#checkpoint-1",
"href": "posts/holoviews_streams/index.html#checkpoint-1",
"title": "HoloViews Streams for Exploring Multidimensional Data",
"section": "Checkpoint 1",
"text": "Checkpoint 1\nHere’s a cleaned up, copy/pastable version of the code thus far!\nimport subprocess\nfrom pathlib import Path\n\nimport param\nimport numpy as np\nimport xarray as xr\nimport panel as pn\nimport hvplot.xarray\nimport geoviews as gv\nimport holoviews as hv\nfrom geoviews.streams import PolyDraw\nfrom metpy.interpolate import cross_section\nimport cartopy.crs as ccrs\n\npn.extension()\ngv.extension(\"bokeh\")\n\nif not Path(\"air.2024.nc\").exists():\n subprocess.run(\"wget https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis/Dailies/pressure/air.2024.nc\", shell=True)\n\n# process data\nds = xr.open_dataset(\"air.2024.nc\", drop_variables=[\"time_bnds\"])\nds_sel = ds.isel(time=slice(0, 3), level=slice(0, 10)).metpy.parse_cf()\nds_sel[\"lon\"] = (ds_sel[\"lon\"] + 180) % 360 - 180\nds_sel = ds_sel.sortby(\"lon\")\nds_cs = cross_section(ds_sel.isel(time=0), start=(50, -130), end=(50, -50))\n\n# visualize data\nmap_plot = ds_sel.hvplot(\n \"lon\",\n \"lat\",\n coastline=True,\n cmap=\"RdYlBu_r\",\n clabel=\"Air Temperature [K]\",\n framewise=False,\n dynamic=False,\n)\n\nxticks = [\n (i, f\"({abs(lon):.0f}°W, {lat:.0f}°N)\")\n for i, (lat, lon) in enumerate(zip(ds_cs[\"lat\"], ds_cs[\"lon\"]))\n]\ncs_plot = ds_cs.hvplot(\n \"index\",\n \"level\",\n xticks=xticks[::15],\n xlabel=\"Coordinates\",\n cmap=\"RdYlBu_r\",\n clabel=\"Air Temperature [K]\",\n flip_yaxis=True,\n framewise=False,\n dynamic=False,\n)\n\n(map_plot + cs_plot).cols(1)"
},
{
"objectID": "posts/holoviews_streams/index.html#working-with-streams",
"href": "posts/holoviews_streams/index.html#working-with-streams",
"title": "HoloViews Streams for Exploring Multidimensional Data",
"section": "Working with streams",
    "text": "Working with streams\nNow that we have a foundation, we can attach a stream to the plot to allow users to interact with the plot.\nTo see what streams are available, I check out the HoloViews Reference Gallery.\nSince I want to draw a line across the map to eventually show a cross section, I chose PolyDraw.\n\n\nMinimal example\nTo start using it:\n\nclick on the PolygonDraw tool in the toolbar\ndouble tap on the plot to start drawing a polygon\nsingle tap on each vertex of the polygon\ndouble tap on the last vertex to finish drawing\n\n\ncs_path = gv.Path(([-80, -50, -30], [28, 48, 18]), crs=ccrs.PlateCarree())\nstream = PolyDraw(source=cs_path, num_objects=1)\n\ncs_path\n\n\n\n\nWe can access the data from the drawn path using the stream.data attribute.\n\nstream.data\n\n{'xs': [array([-80. , -53.67907524, -50. , -35.41679382,\n -30. ])],\n 'ys': [array([28. , 45.54728317, 48. , 26.12519073, 18. ])]}\n\n\nLet’s make something happen when we draw a path on the map by using a DynamicMap.\nThe DynamicMap will mirror the vertices of the drawn data.\n\nimport geoviews as gv\n\ndef copy_and_shift_up(data):\n # error handling; return empty points if there's no data or there are no valid edges\n if not data or not data[\"xs\"] or data[\"xs\"][0][0] == data[\"xs\"][0][1]:\n return gv.Points({\"Longitude\": [], \"Latitude\": []})\n\n xs = data[\"xs\"][0] # 0 to select first edge\n ys = data[\"ys\"][0]\n return gv.Points({\"Longitude\": xs, \"Latitude\": ys}).opts(color=\"red\")\n\n\ncs_path = gv.Path(([-80, -50, -30], [28, 48, 18]), crs=ccrs.PlateCarree()).opts(active_tools=[\"poly_draw\"])\nstream = PolyDraw(source=cs_path, num_objects=1)\n\ncs_path_shifted = gv.DynamicMap(copy_and_shift_up, streams=[stream])\ncs_path + cs_path_shifted\n\n\n\n\n\nWe can see that the right plot reacts to changes to the drawn path on the left plot.\n\n\nInteractive cross section\nNow, let’s return to the original goal, which is to create a cross section 
plot based on the path drawn on the map.\nWe can do this by:\n\nOverlaying the cross section path (cs_path) on the map and laying out the map alongside the cross section plot.\nWrapping the cross section computation and plot inside a DynamicMap so that changes to the cs_path data trigger an update to the cross section.\nUsing a for loop for the cross section computation to handle multiple edges / segments drawn.\n\nSince the data returned from cs_path ranges from -180 to 180, we’ll need to match that in our dataset too.\n\ndef create_cross_section(data):\n if not data or not data[\"xs\"] or data[\"xs\"][0][0] == data[\"xs\"][0][1]:\n return hv.Image([]).opts(width=730, colorbar=True)\n\n xs = data[\"xs\"][0]\n ys = data[\"ys\"][0]\n ds_cs_list = []\n for i in range(len(xs) - 1): # create cross section for each segment\n ds_cs_list.append(\n cross_section(\n ds_sel.isel(time=0),\n start=(ys[0 + i], xs[0 + i]),\n end=(ys[1 + i], xs[1 + i]),\n )\n )\n ds_cs = xr.concat(ds_cs_list, dim=\"index\")\n\n xticks = [\n (i, f\"({abs(lon):.0f}°W, {lat:.0f}°N)\")\n for i, (lat, lon) in enumerate(zip(ds_cs[\"lat\"], ds_cs[\"lon\"]))\n ]\n cs_plot = ds_cs.hvplot(\n \"index\",\n \"level\",\n xticks=xticks[::15],\n xlabel=\"Coordinates\",\n cmap=\"RdYlBu_r\",\n flip_yaxis=True,\n dynamic=False,\n )\n return cs_plot\n\n# create stream\ncs_path = gv.Path([], crs=ccrs.PlateCarree()).opts(color=\"red\", line_width=2)\nstream = PolyDraw(source=cs_path, num_objects=1)\n\n# attach stream\ncs_plot = gv.DynamicMap(create_cross_section, streams=[stream])\n\n# layout\nmap_overlay = (map_plot * cs_path).opts(active_tools=[\"poly_draw\"])\n(map_overlay + cs_plot).cols(1)\n\n\n\n\nWARNING:param.Image41637: Image dimension level is not evenly sampled to relative tolerance of 0.001. 
Please use the QuadMesh element for irregularly sampled data or set a higher tolerance on hv.config.image_rtol or the rtol parameter in the Image constructor.\n\n\n\n\n\ncross_section\n\n\n\n\nSyncing time slider across plots\nSince the time slider only affects the first plot, we’ll need to convert the HoloMap overlay into a pn.pane.HoloViews object to extract the time slider.\nWe can then easily extract the widget from the map_pane and use it with the cs_plot!\n\nmap_pane = pn.pane.HoloViews(map_overlay)\n\nIndex into map_pane.widget_box to get the time slider.\n\ntime_slider = map_pane.widget_box[0]\ntime_slider\n\n\n\n\nWe change:\n\nour callback slightly to include the time slider’s param value (very important to use .param.value instead of .value or else it won’t update!)\nour data selection to use sel(time=value) instead of isel(time=0).\n\n\ndef create_cross_section(data, value): # new kwarg\n if not data or not data[\"xs\"] or data[\"xs\"][0][0] == data[\"xs\"][0][1]:\n return hv.Image([]).opts(width=730, clabel=\"Air Temperature [K]\", colorbar=True)\n\n xs = data[\"xs\"][0]\n ys = data[\"ys\"][0]\n\n ds_cs_list = []\n for i in range(len(xs) - 1):\n ds_cs_list.append(\n cross_section(\n ds_sel.sel(time=value),\n start=(ys[0 + i], xs[0 + i]),\n end=(ys[1 + i], xs[1 + i]),\n )\n )\n ds_cs = xr.concat(ds_cs_list, dim=\"index\")\n ds_cs[\"index\"] = np.arange(len(ds_cs[\"index\"]))\n\n xticks = [\n (i, f\"({abs(lon):.0f}°W, {lat:.0f}°N)\")\n for i, (lat, lon) in enumerate(zip(ds_cs[\"lat\"], ds_cs[\"lon\"]))\n ]\n cs_plot = ds_cs.hvplot(\n \"index\",\n \"level\",\n xticks=xticks[::15],\n xlabel=\"Coordinates\",\n cmap=\"RdYlBu_r\",\n clabel=\"Air Temperature [K]\",\n flip_yaxis=True,\n dynamic=False,\n )\n return cs_plot\n\ncs_plot = gv.DynamicMap(create_cross_section, streams=[stream, time_slider.param.value]) # new stream\n\nNow, let’s put everything together!\nWe need to use pn.Column instead of the + operator here because map_pane is a Panel object, not a HoloViews object.\n\npn.Row(pn.Column(map_pane, 
cs_plot), map_pane.widget_box)"
},
{
"objectID": "posts/holoviews_streams/index.html#checkpoint-2",
"href": "posts/holoviews_streams/index.html#checkpoint-2",
"title": "HoloViews Streams for Exploring Multidimensional Data",
"section": "Checkpoint 2",
"text": "Checkpoint 2\nHere’s the copy pastable code for the second checkpoint:\nimport subprocess\nfrom pathlib import Path\n\nimport param\nimport numpy as np\nimport xarray as xr\nimport panel as pn\nimport hvplot.xarray\nimport geoviews as gv\nimport holoviews as hv\nfrom geoviews.streams import PolyDraw\nfrom metpy.interpolate import cross_section\nimport cartopy.crs as ccrs\n\npn.extension()\ngv.extension(\"bokeh\")\n\ndef create_cross_section(data, value):\n if not data or not data[\"xs\"] or data[\"xs\"][0][0] == data[\"xs\"][0][1]:\n return hv.Image([]).opts(width=730, clabel=\"Air Temperature [K]\", colorbar=True)\n\n xs = data[\"xs\"][0]\n ys = data[\"ys\"][0]\n\n ds_cs_list = []\n for i in range(len(xs) - 1):\n ds_cs_list.append(\n cross_section(\n ds_sel,\n start=(ys[0 + i], xs[0 + i]),\n end=(ys[1 + i], xs[1 + i]),\n )\n )\n ds_cs = xr.concat(ds_cs_list, dim=\"index\")\n ds_cs[\"index\"] = np.arange(len(ds_cs[\"index\"]))\n\n xticks = [\n (i, f\"({abs(lon):.0f}°W, {lat:.0f}°N)\")\n for i, (lat, lon) in enumerate(zip(ds_cs[\"lat\"], ds_cs[\"lon\"]))\n ]\n cs_plot = ds_cs.hvplot(\n \"index\",\n \"level\",\n xticks=xticks[::15],\n xlabel=\"Coordinates\",\n cmap=\"RdYlBu_r\",\n clabel=\"Air Temperature [K]\",\n flip_yaxis=True,\n dynamic=False,\n )\n return cs_plot\n\nif not Path(\"air.2024.nc\").exists():\n subprocess.run(\"wget https://downloads.psl.noaa.gov/Datasets/ncep.reanalysis/Dailies/pressure/air.2024.nc\", shell=True)\n\n# process data\nds = xr.open_dataset(\"air.2024.nc\", drop_variables=[\"time_bnds\"])\nds_sel = ds.isel(time=slice(0, 3), level=slice(0, 10)).metpy.parse_cf()\nds_sel[\"lon\"] = (ds_sel[\"lon\"] + 180) % 360 - 180\nds_sel = ds_sel.sortby(\"lon\")\n\n# create base map\nmap_plot = ds_sel.hvplot(\n \"lon\",\n \"lat\",\n coastline=True,\n cmap=\"RdYlBu_r\",\n clabel=\"Air Temperature [K]\",\n framewise=False,\n dynamic=False,\n)\n\n# create stream\ncs_path = gv.Path([], crs=ccrs.PlateCarree()).opts(color=\"red\", 
line_width=2)\nstream = PolyDraw(source=cs_path, num_objects=1)\n\n# overlay\nmap_overlay = (map_plot * cs_path).opts(active_tools=[\"poly_draw\"])\nmap_pane = pn.pane.HoloViews(map_overlay)\n\n# attach stream\ntime_slider = map_pane.widget_box[0]\ncs_plot = gv.DynamicMap(create_cross_section, streams=[stream, time_slider.param.value])\n\npn.Row(pn.Column(map_pane, cs_plot), map_pane.widget_box)"
},
{
"objectID": "posts/holoviews_streams/index.html#encapsulating-into-param-class",
"href": "posts/holoviews_streams/index.html#encapsulating-into-param-class",
"title": "HoloViews Streams for Exploring Multidimensional Data",
"section": "Encapsulating into param class",
    "text": "Encapsulating into param class\nNow, as you may notice, things are getting a tad complex and out of hand.\nFor the finale, I’ll demonstrate how to convert this into an extensible pn.viewable.Viewer class.\nThe main things I changed were:\n\nhvPlot -> HoloViews\nCreating a class to watch time and level\nManually creating DynamicMaps for each plot and writing their own custom callbacks\nMoving streams to @param.depends\n\n\nimport param\nimport numpy as np\nimport xarray as xr\nimport panel as pn\nimport hvplot.xarray\nimport geoviews as gv\nimport holoviews as hv\nfrom geoviews.streams import PolyDraw\nfrom metpy.interpolate import cross_section\nimport cartopy.crs as ccrs\n\npn.extension()\ngv.extension(\"bokeh\")\n\n\nclass DataExplorer(pn.viewable.Viewer):\n\n ds = param.ClassSelector(class_=xr.Dataset)\n\n time = param.Selector()\n\n level = param.Selector()\n\n def __init__(self, ds: xr.Dataset, **params):\n super().__init__(**params)\n self.ds = ds\n\n # populate selectors\n self.param[\"time\"].objects = list(\n ds[\"time\"].dt.strftime(\"%Y-%m-%d %H:%M\").values\n )\n self.param[\"level\"].objects = list(ds[\"level\"].values)\n\n self.time = self.param[\"time\"].objects[0]\n self.level = self.param[\"level\"].objects[0]\n\n @param.depends(\"time\", \"level\")\n def _update_map(self):\n ds_sel = self.ds.sel(time=self.time, level=self.level)\n return gv.Image(\n ds_sel,\n kdims=[\"lon\", \"lat\"],\n vdims=[\"air\"],\n ).opts(\n cmap=\"RdYlBu_r\",\n clabel=\"Air Temperature [K]\",\n responsive=True,\n xaxis=None,\n yaxis=None,\n )\n\n @param.depends(\"_stream.data\", \"time\")\n def _update_cross_section(self):\n data = self._stream.data\n if not data or not data[\"xs\"]:\n data[\"xs\"] = [[-80, -80]]\n data[\"ys\"] = [[18, 28]]\n\n ds_sel = self.ds.sel(time=self.time)\n ds_sel = ds_sel.metpy.parse_cf()\n\n xs = data[\"xs\"][0]\n ys = data[\"ys\"][0]\n\n ds_cs_list = []\n for i in range(len(xs) - 1):\n ds_cs_list.append(\n cross_section(\n ds_sel,\n 
start=(ys[0 + i], xs[0 + i]),\n end=(ys[1 + i], xs[1 + i]),\n )\n )\n ds_cs = xr.concat(ds_cs_list, dim=\"index\")\n ds_cs[\"index\"] = np.arange(len(ds_cs[\"index\"]))\n\n xticks = [\n (i, f\"({lon:.0f}°E, {lat:.0f}°N)\")\n for i, (lat, lon) in enumerate(zip(ds_cs[\"lat\"], ds_cs[\"lon\"]))\n ]\n x_indices = np.linspace(0, len(xticks) - 1, 10).astype(int)\n xticks = [xticks[i] for i in x_indices]\n cs_plot = hv.Image(ds_cs, kdims=[\"index\", \"level\"], vdims=[\"air\"]).opts(\n xticks=xticks,\n xlabel=\"Coordinates\",\n cmap=\"RdYlBu_r\",\n clabel=\"Air Temperature [K]\",\n invert_yaxis=True,\n responsive=True,\n xrotation=45,\n )\n return cs_plot\n\n def __panel__(self):\n # create widgets\n time_slider = pn.widgets.DiscreteSlider.from_param(self.param[\"time\"])\n level_slider = pn.widgets.DiscreteSlider.from_param(self.param[\"level\"])\n\n # create plots\n self._cs_path = gv.Path([], crs=ccrs.PlateCarree()).opts(\n color=\"red\", line_width=2\n )\n self._stream = PolyDraw(source=self._cs_path, num_objects=1)\n\n map_plot = gv.DynamicMap(self._update_map)\n coastline = gv.feature.coastline()\n map_overlay = (map_plot * self._cs_path * coastline).opts(\n active_tools=[\"poly_draw\"]\n )\n\n self._cs_plot = gv.DynamicMap(self._update_cross_section).opts(framewise=False)\n\n sidebar = pn.Column(time_slider, level_slider)\n main = pn.Row(map_overlay, self._cs_plot, sizing_mode=\"stretch_both\")\n return pn.template.FastListTemplate(\n sidebar=[sidebar],\n main=[main],\n ).show()\n\n\nds = xr.open_dataset(\"air.2024.nc\", drop_variables=[\"time_bnds\"])\nDataExplorer(ds)\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\nLaunching server at http://localhost:58591\n\n\n\n\n\n\n\n\ntemplate\n\n\nNow, you could extend this easily and cleanly by adding new methods, like if you added a points stream:\n @param.depends(\"_point_stream.data\", \"level\")\n def _update_cross_section_timeseries(self):\n ...\nOur Discourse community is here for you!"
},
{
"objectID": "posts/hv_release_1.10/index.html",
"href": "posts/hv_release_1.10/index.html",
"title": "HoloViews 1.10 Release",
"section": "",
"text": "We are very pleased to announce the release of HoloViews 1.10!\nThis release contains a large number of features and improvements. Some highlights include:\nJupyterLab support:\nNew components:\nPlus many other bug fixes, enhancements and documentation improvements. For full details, see the Release Notes.\nIf you are using Anaconda, HoloViews can most easily be installed by executing the command conda install -c pyviz holoviews . Otherwise, use pip install holoviews."
},
{
"objectID": "posts/hv_release_1.10/index.html#jupyterlab-support",
"href": "posts/hv_release_1.10/index.html#jupyterlab-support",
"title": "HoloViews 1.10 Release",
"section": "JupyterLab support",
"text": "JupyterLab support\nWith JupyterLab now coming out of the alpha release stage, we have finally made HoloViews compatible with JupyterLab by creating the jupyterlab_pyviz extension. The extension can be installed with:\njupyter labextension install @pyviz/jupyterlab_pyviz\n\nThe JupyterLab extension provides all the interactivity of the classic notebook, and so both interfaces are now fully supported. Both classic notebook and JupyterLab now make it easier to work with streaming plots, because deleting or re-executing a cell in the classic notebook or JupyterLab now cleans up the plot and ensures that any streams are unsubscribed."
},
{
"objectID": "posts/hv_release_1.10/index.html#new-elements",
"href": "posts/hv_release_1.10/index.html#new-elements",
"title": "HoloViews 1.10 Release",
"section": "New elements",
"text": "New elements\nThe main improvement in this release is the addition of a large number of elements. A number of these elements build on the Graph element introduced earlier in the 1.9 release, including the Sankey, Chord and TriMesh elements. Other new elements include HexTiles for binning many points on a hexagonal grid, Violins for comparing distributions across multiple variables, Labels for plotting large collections of text labels, and Div for displaying arbitrary HTML alongside Bokeh-based plots and tables.\n\nSankey\nThe new Sankey element is a pure-Python port of d3-sankey. Like most other elements, it can be rendered using both Matplotlib and Bokeh. In Bokeh, all the usual interactivity will be supported, such as providing hover information and interactively highlighting connected nodes and edges. Here we have rendered energy flow to SVG with matplotlib:\n\n\n\nChord\nThe Chord element had been requested a number of times, because it had previously been supported in the now deprecated Bokeh Charts package. Thanks to Bokeh’s graph support, hovering and tapping on the Chord nodes highlights connected nodes, helping you make sense of even densely interconnected graphs:\n\n\n\n\n\n\n\n\n\n\nTriMesh\nAlso building on the graph capabilities is the TriMesh element, which allows defining arbitrary meshes from a set of nodes and a set of simplices (triangles defined as lists of node indexes). The TriMesh element allows easily visualizing Delaunay triangulations and even very large meshes, thanks to corresponding support added to datashader. Below we can see an example of a TriMesh colored by vertex value and an interpolated datashaded mesh of the Chesapeake Bay containing 1M triangles:\n\n\n\n\n\n\n\n\n\n\n\n\nHexTiles\nAnother often requested feature is the addition of a hexagonal bin plot, which can be very helpful in visualizing large collections of points. 
Thanks to the recent addition of a hex tiling glyph in the Bokeh 0.12.15 release, it was straightforward to add this support in the form of a HexTiles element (http://holoviews.org/reference/elements/bokeh/HexTiles.html), which supports both simple bin counts and weighted binning, and fixed or variable hex sizes.\nBelow we can see a HexTiles plot of ~7 million points representing the NYC population, where each hexagonal bin is scaled and colored by the bin value:\n\n\n\n\n\n\n\n\n\n\nViolin\nViolin elements have been one of the most frequently requested plot types since the Matplotlib-only Seaborn interface was deprecated from HoloViews. With this release a native implementation of violins was added for both Matplotlib and Bokeh, which allows comparing distributions across one or more independent variables:\n\n\n\nRadial HeatMap\nThanks to the contributions of Franz Woellert, the existing HeatMap element has now gained support for radial heatmaps. Radial heatmaps are useful for plotting quantities varying over some cyclic variable, such as the day of the week or time of day. Below we can see how the daily number of taxi rides changes over the course of a year:\n\n\n\nLabels\nThe existing Text element allows adding text to a plot, but only one item at a time, which is not suitable for plotting the large collections of text items that many users have been requesting. The new Labels element provides vectorized text plotting, which is probably most often used to annotate data points or regions of another plot type. Here we show that it can also be used on its own, to plot unicode emoji characters arranged by semantic similarity using the t-SNE dimensionality reduction algorithm:\n\n\n\nDiv\nThe Div element is exclusive to Bokeh and allows embedding arbitrary HTML in a Bokeh plot. One simple example of the infinite variety of possible uses for Div is to display Pandas summary tables alongside a plot:\n\nbars + hv.Div(df.describe().to_html())"
},
{
"objectID": "posts/hv_release_1.10/index.html#editing-tools",
"href": "posts/hv_release_1.10/index.html#editing-tools",
"title": "HoloViews 1.10 Release",
"section": "Editing Tools",
    "text": "Editing Tools\nIn the Bokeh 0.12.15 release, a new set of interactive tools was added to edit and draw different glyph types. These tools are now available from HoloViews as the PointDraw, PolyDraw, BoxEdit, and PolyEdit stream classes, which make the drawn or edited data available to work with from Python. The drawing tools open up the possibility for rich interactivity and annotation, allowing users to create even very complex types of interactive applications.\n\n \n\nOne example of the many workflows now supported is to draw regions of interest on an image using BoxEdit, computing the mean value over time for each such region:"
},
{
"objectID": "posts/hv_release_1.10/index.html#setting-options",
"href": "posts/hv_release_1.10/index.html#setting-options",
"title": "HoloViews 1.10 Release",
"section": "Setting options",
    "text": "Setting options\nThe new .options() method present on all viewable objects makes it much simpler to set options without worrying about the underlying difference between plot, style, and norm options. A comparison between the two APIs demonstrates how much more readable and easy to type the new approach is:\n\n# New options API\nimg.options(cmap='RdBu_r', colorbar=True, width=360, height=300)\n\n# Old opts API\nimg.opts(plot=dict(colorbar=True, width=360), style=dict(cmap='RdBu_r'));\n\nEach option still belongs to one of the three categories internally, depending on whether it is processed by HoloViews or passed down into the underlying plotting library, but the user usually no longer has to remember which options are in which category.\nIt is also now possible to explicitly declare the backend for each option, which makes it easier to support multiple backends:\n\nimg.options(width=360, backend='bokeh').options(fig_inches=(6, 6), backend='matplotlib');"
},
{
"objectID": "posts/hv_release_1.10/index.html#image-hover",
"href": "posts/hv_release_1.10/index.html#image-hover",
"title": "HoloViews 1.10 Release",
"section": "Image hover",