Fix SAPI 4 driver #17599
Conversation
Could you confirm what the experience is like in secure mode, both on the secure desktop and with the environment variable set?
I guess we will have the same issue as the add-on store warning (#15261), i.e. the warning may be shown multiple times, at most once per profile.
Also, in the UG, the SAPI4 paragraph says:
Should we add here that SAPI4 usage in NVDA is deprecated and will be removed in the future?
For the user, does this warning dialog have any practical effect? As far as I know, many Chinese users rely on this synthesizer.
@cary-rowen
Hi @zstanecic, I'm not complaining; I just thought about what effect this dialog could have on end users, so I made the comment above. I'd love to talk about the current state of TTS in Chinese, although it's a bit off topic and probably deserves a separate discussion. Regarding the AISound you mentioned, it may be just a temporary solution:
Regarding IBM ViaVoice TTS:
Regarding Eloquence: of course, for me, Vocalizer is my only choice. As for SAPI 5 and OneCore, they are really slow to respond. In summary, response speed is important. New technologies and new TTS engines are of course developing, but as you can see, Microsoft's natural voices are not yet supported. Although they can be used through SAPI 5 via the Natural Voice Adapter, the response speed is still too limited for long-term use. Hope this is clearer.
Hi @cary-rowen, it is very interesting to hear the history of Chinese speech synthesizers: what exists, and what's popular and widely used.
@cary-rowen thank you for the questions. We have chosen to add this dialog as, while it may be irritating, we believe the experience of this being a surprise to users would be worse. Many users do not regularly read the changelog, which is not always translated into their language, so this is more likely to be noticed. Regarding TTS support, does eSpeak-ng have support for Chinese languages? I see the following options in NVDA with eSpeak:
Notwithstanding users' dislike of the sound of eSpeak (which is of course valid), are these voices unsatisfactory in other ways? For example, in English eSpeak just reads Chinese characters as "Chinese letter"; does Chinese eSpeak have proper support for Chinese writing? Do other TTS engines support more Chinese languages?
As far as I know, in China, SAPI 4 TTS engines and Vocalizer are the most popular speech synthesis engines. SAPI 5 and OneCore have concerning response speed and poor audio quality. From a user's perspective, it is unclear what exactly has been upgraded in SAPI 5: slower speed and unclear reading? We also need to consider what benefits removing this feature would bring to NVDA users, or if there are any compelling reasons to do so. Thanks!
SAPI 4 is not included in recent versions of Windows, and it is no longer supported by Microsoft. SAPI 5 has been a built-in component of Windows since Windows XP, but it is also quite old, and I'm not sure whether Microsoft is still actively maintaining it. SAPI 5.3 supports parsing SSML in addition to its own proprietary XML format, but only built-in voices can fully utilize this feature and extract most of the information from SSML. Third-party voice developers, unfortunately, can still only use the old interfaces that were designed for the proprietary XML format. Although the SAPI framework automatically converts SSML into a compatible data format for third-party TTS engines, some SSML features that cannot be represented in the proprietary XML format are lost in the conversion.

As for the newer interfaces: OneCore seems to be a weird "variation" of SAPI 5. Their registry key structures are similar; you can even copy registry keys to make "OneCore-exclusive" voices usable via SAPI 5. The problem is that Microsoft provides no documentation or support for third-party OneCore voices, so third-party voice vendors still have to use SAPI. The Azure Speech SDK can only use Microsoft voices: it supports online Azure voices and offline neural/Apollo voices, but they are all from Microsoft.

So, although both SAPI 4 and SAPI 5 are old and have not been actively updated for a while, they are the only speech systems supported not only by many client applications but also by many third-party voice synthesizers. OneCore and the Azure Speech SDK are not open to voice providers.
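The registry-key similarity between OneCore and SAPI 5 voices mentioned above can be sketched in a few lines. The two token paths below are the commonly cited registry locations for OneCore and SAPI 5 voice tokens, but treat them (and the helper names) as assumptions; actually exposing a OneCore voice to SAPI 5 would require copying the entire key tree with administrator rights, and as noted above this is unsupported by Microsoft.

```python
import sys

# Commonly cited registry locations under HKEY_LOCAL_MACHINE (assumptions,
# not verified against every Windows version):
ONECORE_TOKENS = r"SOFTWARE\Microsoft\Speech_OneCore\Voices\Tokens"
SAPI5_TOKENS = r"SOFTWARE\Microsoft\Speech\Voices\Tokens"


def oneCoreToSapi5KeyPath(tokenPath: str) -> str:
    """Map a OneCore voice token key path to its SAPI 5 equivalent."""
    return tokenPath.replace(ONECORE_TOKENS, SAPI5_TOKENS)


def listOneCoreVoiceTokens() -> list:
    """Enumerate OneCore voice token subkeys (Windows only)."""
    if sys.platform != "win32":
        raise OSError("The Windows registry is only available on Windows")
    import winreg

    tokens = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ONECORE_TOKENS) as key:
        i = 0
        while True:
            try:
                # EnumKey raises OSError once the index runs past the last subkey.
                tokens.append(winreg.EnumKey(key, i))
            except OSError:
                break
            i += 1
    return tokens
```

This only illustrates why the two systems look like variations of each other; it is not a recommendation to modify the registry.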
#17592 makes SAPI 5 voices use WASAPI to improve their responsiveness. I think a similar approach could be applied to SAPI 4 voices, making them use WASAPI as well. I want to know the current performance level of SAPI 4 voices: are they already good enough? As SAPI 4 is being deprecated, such a fix might not be worthwhile.
If SAPI 4 only has a year left in its life, it might not really be worth it. But the changes brought about by #17592 will be exciting.
As mentioned above, Microsoft's attitude towards the development and openness of its TTS interfaces is somewhat chaotic, and deciding to abandon SAPI 4 while offering no built-in replacement open to voice vendors seems too hasty. In terms of actual user experience, neither the SAPI 4 nor the SAPI 5 driver uses the WASAPI wave player, yet their response speeds differ greatly: SAPI 4 calls are significantly more efficient than those of SAPI 5 voice libraries. I don't know whether this is because Microsoft's development is unfocused, but at least it can be confirmed that all the SAPI 4 voice libraries available on the market are faster than those of SAPI 5, and much faster!
When SAPI5 starts to use WASAPI, the responsiveness of SAPI5 voices can be on par with OneCore voices. So I suspect that it's the audio output system of SAPI5 that makes it slow. The voices themselves are not that bad. But both SAPI4 and SAPI5 use WinMM, so it's weird. |
@LeonarddeR I've switched to using
Great overall, just some minor suggestions.
In Japan, there is a SAPI4 speech engine developed more than 20 years ago that is still preferred by users today. According to recent server logs of the Japanese version of NVDA, 7% of users who have opted to send data are using the SAPI4 driver. It is important to clearly communicate the necessity of discontinuing SAPI4 support, as well as the benefits that will come in exchange for its removal.
source/synthDrivers/sapi4.py
queueHandler.queueFunction(queueHandler.eventQueue, impl)
if not globalVars.appArgs.secure:
I think it's important that this is announced in other forms of secure mode too, as some users are daily drivers of this, and it's important that secure-context users get warned about it. They can get their admin to disable it by disabling secure mode temporarily.
I just don't think it should be done on secure screens (e.g. password, UAC), as it would be a forever nag: a user wouldn't be able to save the settings on secure screens directly. With this new behaviour of saving the variable, is it still saved the same way? Does the nag always happen in secure mode?
The flag is still saved to config, it's just now done via the SAPI4 SynthDriver
rather than as part of the config schema directly.
I've now updated it to show the warning in all cases except when running on a secure desktop.
Co-authored-by: Sean Budd <[email protected]>
Link to issue number:
Fixes #17516
Summary of the issue:
After the move to using Windows core audio APIs exclusively, the SAPI4 driver stopped working.
Description of user facing changes
The SAPI4 driver works again.
A warning is shown the first time the user uses SAPI4 informing them that it is deprecated.
Description of development approach
Implemented a function to translate between MMDevice Endpoint IDs and WaveOut device IDs, based on this Microsoft code sample.
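The translation between WaveOut device IDs and MMDevice endpoint IDs can be sketched with ctypes. The driver messages below come from mmddk.h (their numeric values are assumptions based on the commonly quoted Microsoft sample, not copied from this PR), and the function names are illustrative rather than NVDA's actual API:

```python
import ctypes
import sys
from ctypes import wintypes

# Driver messages from mmddk.h; DRV_RESERVED = 0x0800. Values are assumptions
# based on the Microsoft sample, not taken from this PR's code.
DRV_RESERVED = 0x0800
DRV_QUERYFUNCTIONINSTANCEID = DRV_RESERVED + 17
DRV_QUERYFUNCTIONINSTANCEIDSIZE = DRV_RESERVED + 18

# WinMM only exists on Windows; guard so the sketch imports elsewhere.
winmm = ctypes.WinDLL("winmm") if sys.platform == "win32" else None


def waveOutIdToEndpointId(devID: int) -> str:
    """Query WinMM for the MMDevice endpoint ID backing a WaveOut device ID."""
    if winmm is None:
        raise OSError("WinMM is only available on Windows")
    size = wintypes.DWORD(0)
    # First call asks the driver how many bytes the endpoint ID string needs.
    res = winmm.waveOutMessage(
        ctypes.c_void_p(devID), DRV_QUERYFUNCTIONINSTANCEIDSIZE,
        ctypes.byref(size), 0,
    )
    if res != 0:
        raise OSError(f"waveOutMessage failed with MMRESULT {res}")
    buf = ctypes.create_unicode_buffer(size.value // ctypes.sizeof(ctypes.c_wchar))
    # Second call fills the buffer with the endpoint ID itself.
    res = winmm.waveOutMessage(
        ctypes.c_void_p(devID), DRV_QUERYFUNCTIONINSTANCEID,
        buf, size.value,
    )
    if res != 0:
        raise OSError(f"waveOutMessage failed with MMRESULT {res}")
    return buf.value


def endpointIdToWaveOutId(endpointId: str) -> int:
    """Reverse direction: scan all WaveOut devices for a matching endpoint ID."""
    if winmm is None:
        raise OSError("WinMM is only available on Windows")
    for devID in range(winmm.waveOutGetNumDevs()):
        if waveOutIdToEndpointId(devID) == endpointId:
            return devID
    raise LookupError(f"No WaveOut device for endpoint {endpointId!r}")
```

The size query followed by a fill query is the standard two-step pattern for variable-length data in the WinMM driver interface.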
Added a config key, speech.hasSapi4WarningBeenShown, which defaults to False.
Added a synthChanged callback that shows a dialog when the synth is set to SAPI4 if this config key is False and this is not a fallback synthesizer.
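The once-per-profile warning logic described above can be sketched as follows. The config access and dialog call are simplified stand-ins (a plain dict and a callable), not NVDA's actual APIs, and the function name is hypothetical:

```python
def maybeShowSapi4Warning(config: dict, synthName: str, isFallback: bool, showDialog) -> bool:
    """Show the SAPI4 deprecation dialog at most once per config profile.

    Returns True if the dialog was shown on this call.
    """
    if synthName != "sapi4":
        return False
    if isFallback:
        # Don't warn when SAPI4 was only loaded as an emergency fallback synth.
        return False
    if config["speech"].get("hasSapi4WarningBeenShown", False):
        return False
    showDialog()
    # Persisting the flag is what limits the warning to once per profile.
    config["speech"]["hasSapi4WarningBeenShown"] = True
    return True
```

Because the flag lives in the config profile, the known issue below follows naturally: a fresh profile (such as the launcher's) has the flag unset, so the warning appears again.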
Testing strategy:
Ran NVDA, and used it with SAPI4. Changed the audio output device to ensure audio was routed as expected.
Known issues with pull request:
When first updating to a version with this PR merged, if the user uses SAPI4 as their primary speech synth, they will be warned about its deprecation in the launcher and when they first start the newly updated NVDA. This is unavoidable as we don't save config from the launcher.
The dialog is only shown once per config profile, so may be missed by some users.
Other options I have considered include:
The warning dialog is shown after SAPI4 is loaded. In the instance that the user is already using SAPI4, this is correct behaviour. In the case of switching to SAPI4, perhaps a dialog should appear before we terminate the current synth and initialise SAPI4.
Code Review Checklist: