forked from barome/AK-xGenesis
reset leds driver to matr1x version #2
Open

justin0406 wants to merge 63 commits into brothaedhung:ak-xGenesis-443-aosp-gee from justin0406:ak-xGenesis-443-aosp-gee
Conversation
Signed-off-by: justin0406 <[email protected]>
lib, net: make isodigit() public and use it (Andy Shevchenko)
random: add debugging code to detect early use of get_random_bytes() (Theodore Ts'o)
random: initialize the last_time field in struct timer_rand_state (Theodore Ts'o)
random: don't zap entropy count in rand_initialize() (Theodore Ts'o)
random: printk notifications for urandom pool initialization (Theodore Ts'o)
random: make add_timer_randomness() fill the nonblocking pool first (Theodore Ts'o)
random: convert DEBUG_ENT to tracepoints (Theodore Ts'o)
random: push extra entropy to the output pools (Theodore Ts'o)
random: drop trickle mode (Theodore Ts'o)
random: adjust the generator polynomials in the mixing function slightly (Theodore Ts'o)
random: speed up the fast_mix function by a factor of four (Theodore Ts'o)
random: cap the rate which the /dev/urandom pool gets reseeded (Theodore Ts'o)
random: optimize the entropy_store structure (Theodore Ts'o)
random: optimize spinlock use in add_device_randomness() (Theodore Ts'o)
random: fix the tracepoint for get_random_bytes(_arch) (Theodore Ts'o)
random: account for entropy loss due to overwrites (H. Peter Anvin)
random: allow fractional bits to be tracked (H. Peter Anvin)
random: statically compute poolbitshift, poolbytes, poolbits (H. Peter Anvin)
random: mix in architectural randomness earlier in extract_buf() (Theodore Ts'o)
random: allow architectures to optionally define random_get_entropy() (Theodore Ts'o)
random: run random_int_secret_init() run after all late_initcalls (Theodore Ts'o)
char: Convert use of typedef ctl_table to struct ctl_table (Joe Perches)
random: fix accounting race condition with lockless irq entropy_count update (Jiri Kosina)
drivers/char/random.c: fix priming of last_data (Jarod Wilson)
lib/string_helpers: introduce generic string_unescape (Andy Shevchenko)
random: fix locking dependency with the tasklist_lock (Theodore Ts'o)
locking: Various static lock initializer fixes (Thomas Gleixner)
random: prime last_data value per fips requirements (Jarod Wilson)
random: fix debug format strings (Jiri Kosina)
random: make it possible to enable debugging without rebuild (Jiri Kosina)
random: mix in architectural randomness in extract_buf() (H. Peter Anvin)
random: Add comment to random_initialize() (Tony Luck)
random: add tracepoints for easier debugging and verification (Theodore Ts'o)

Conflicts: include/trace/events/random.h
Signed-off-by: CallMeAldy <[email protected]>
…ving possible value for this tunable. According to http://elinux.org/images/1/1d/Comparing_Power_Saving_Techniques_For_Multicore_ARM_Platforms.pdf this tunable provides serious power-saving improvements, close to the savings of hotplugging cores. Since this kernel always keeps a minimum of 2 CPUs online while the screen is off, we might actually benefit from this, because the feature packs tasks together and tries to bind them to cpu0, which in theory lets cpu1 idle longer and thus improves battery life. Signed-off-by: franciscofranco <[email protected]> Conflicts: kernel/sched/core.c
Signed-off-by: Paul Reioux <[email protected]>
bump version to 3.6 Signed-off-by: Paul Reioux <[email protected]>
The mutex is not released while checking for a NULL pdata, which can lead to a possible deadlock condition. Signed-off-by: Alok Chauhan <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
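A minimal sketch of the locking pattern being fixed, with hypothetical names (this is not the driver's actual code): the early-return path must drop the mutex it took.

```c
#include <linux/mutex.h>
#include <linux/errno.h>

/* Hypothetical driver state, for illustration only. */
struct example_dev {
	struct mutex lock;
	void *pdata;
};

static int example_check_pdata(struct example_dev *dev)
{
	mutex_lock(&dev->lock);
	if (!dev->pdata) {
		/* The bug class described above: returning here while still
		 * holding the mutex would leave it locked forever. */
		mutex_unlock(&dev->lock);
		return -EINVAL;
	}
	/* ... use dev->pdata under the lock ... */
	mutex_unlock(&dev->lock);
	return 0;
}
```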
In process_backlog the input_pkt_queue is only checked once for new packets, and the quota is artificially reduced to reflect precisely the number of packets on the input_pkt_queue so that the loop exits appropriately. This patch changes the behavior to be more straightforward and less convoluted: packets are processed until either the quota is met or there are no more packets to process. This seems to provide a small but noticeable performance improvement, a result of staying in the process_backlog loop longer, which can reduce the number of IPIs. Performance data using super_netperf TCP_RR with 200 flows: Before fix: 88.06% CPU utilization, 125/190/309 90/95/99% latencies, 1.46808e+06 tps, 1145382 intrs./sec. With fix: 87.73% CPU utilization, 122/183/296 90/95/99% latencies, 1.4921e+06 tps, 1021674.30 intrs./sec. Signed-off-by: Tom Herbert <[email protected]> Acked-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
Any user-process caller of wait_for_completion(), except the global init process, might be chosen by the OOM killer while waiting for a complete() call from some other process which does memory allocation; see CVE-2012-4398 "kernel: request_module() OOM local DoS" for how this can happen. When such users are chosen by the OOM killer while waiting in TASK_UNINTERRUPTIBLE, the system is kept stressed by memory starvation because the OOM killer cannot kill them. kthread_create() is one such user, and this patch fixes the problem for kthreadd by making kthread_create() killable, the same approach used for fixing CVE-2012-4398. Signed-off-by: Tetsuo Handa <[email protected]> Cc: Oleg Nesterov <[email protected]> Acked-by: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
In the presence of memoryless nodes, numa_node_id() will return the current CPU's NUMA node, but that may not be where we expect to allocate memory from. Instead, we should rely on the fallback code in the memory allocator itself, by using NUMA_NO_NODE. Also, when calling kthread_create_on_node(), use the nearest node with memory to the CPU in question, rather than the node it is running on. Signed-off-by: Nishanth Aravamudan <[email protected]> Reviewed-by: Christoph Lameter <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Anton Blanchard <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Jan Kara <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tetsuo Handa <[email protected]> Cc: Wanpeng Li <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Ben Herrenschmidt <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
Commit ("kthread: make kthread_create() killable") meant for allowing kthread_create() to abort as soon as killed by the OOM-killer. But returning -ENOMEM is wrong if killed by SIGKILL from userspace. Change kthread_create() to return -EINTR upon SIGKILL. Signed-off-by: Tetsuo Handa <[email protected]> Cc: Oleg Nesterov <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: <[email protected]> [3.13+] Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
If the worker is already executing a work item when another is queued, we can safely skip the wakeup without worrying about stalling the queue, thus avoiding waking up the busy worker spuriously. Spurious wakeups should be fine, but they still aren't nice and avoiding them is trivial here. tj: Updated description. Signed-off-by: Lai Jiangshan <[email protected]> Signed-off-by: Tejun Heo <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
ARRAY_SIZE is more concise to use when the size of an array is divided by the size of its type or the size of its first element. The Coccinelle semantic patch that makes this change is as follows: // <smpl> @@ type T; T[] E; @@ - (sizeof(E)/sizeof(E[...])) + ARRAY_SIZE(E) // </smpl> Signed-off-by: Himangi Saraogi <[email protected]> Signed-off-by: Paul Moore <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
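As a concrete example of the transformation, here is a standalone userspace analogue (the kernel's own ARRAY_SIZE() in include/linux/kernel.h additionally type-checks its argument):

```c
#include <stdio.h>

#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

static const int levels[] = { 1, 2, 4, 8, 16 };

int main(void)
{
	size_t before = sizeof(levels) / sizeof(levels[0]); /* open-coded form */
	size_t after  = ARRAY_SIZE(levels);                 /* what the semantic patch produces */

	printf("%zu %zu\n", before, after); /* prints "5 5" */
	return 0;
}
```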
It's possible that the caller passed a NULL for scontext. However, if this is a deferred mapping we might still attempt to call *scontext = kstrdup(). This is bad. Instead just return the len. Signed-off-by: Eric Paris <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
On architectures where cputime_t is a 64 bit type, it is possible to trigger a divide by zero on the do_div(temp, (__force u32) total) line, if total is a non-zero number but has its lower 32 bits zeroed. Removing the casting is not a good solution since some do_div() implementations do cast to u32 internally. This problem can be triggered in practice on very long lived processes:

PID: 2331 TASK: ffff880472814b00 CPU: 2 COMMAND: "oraagent.bin"
#0 [ffff880472a51b70] machine_kexec at ffffffff8103214b
#1 [ffff880472a51bd0] crash_kexec at ffffffff810b91c2
#2 [ffff880472a51ca0] oops_end at ffffffff814f0b00
#3 [ffff880472a51cd0] die at ffffffff8100f26b
#4 [ffff880472a51d00] do_trap at ffffffff814f03f4
#5 [ffff880472a51d60] do_divide_error at ffffffff8100cfff
#6 [ffff880472a51e00] divide_error at ffffffff8100be7b
[exception RIP: thread_group_times+0x56]
RIP: ffffffff81056a16 RSP: ffff880472a51eb8 RFLAGS: 00010046
RAX: bc3572c9fe12d194 RBX: ffff880874150800 RCX: 0000000110266fad
RDX: 0000000000000000 RSI: ffff880472a51eb8 RDI: 001038ae7d9633dc
RBP: ffff880472a51ef8 R8: 00000000b10a3a64 R9: ffff880874150800
R10: 00007fcba27ab680 R11: 0000000000000202 R12: ffff880472a51f08
R13: ffff880472a51f10 R14: 0000000000000000 R15: 0000000000000007
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#7 [ffff880472a51f00] do_sys_times at ffffffff8108845d
#8 [ffff880472a51f40] sys_times at ffffffff81088524
#9 [ffff880472a51f80] system_call_fastpath at ffffffff8100b0f2
RIP: 0000003808caac3a RSP: 00007fcba27ab6d8 RFLAGS: 00000202
RAX: 0000000000000064 RBX: ffffffff8100b0f2 RCX: 0000000000000000
RDX: 00007fcba27ab6e0 RSI: 000000000076d58e RDI: 00007fcba27ab6e0
RBP: 00007fcba27ab700 R8: 0000000000000020 R9: 000000000000091b
R10: 00007fcba27ab680 R11: 0000000000000202 R12: 00007fff9ca41940
R13: 0000000000000000 R14: 00007fcba27ac9c0 R15: 00007fff9ca41940
ORIG_RAX: 0000000000000064 CS: 0033 SS: 002b

Signed-off-by: Stanislaw Gruszka <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
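The arithmetic behind the crash can be reproduced in a few lines of standalone C (illustrative only): a 64-bit total that is non-zero but has its low 32 bits clear truncates to 0 when cast to u32, so the subsequent division traps.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t total = 0x100000000ULL;    /* non-zero 64-bit cputime total */
	uint32_t divisor = (uint32_t)total; /* low 32 bits are zero -> 0 */

	printf("total=%llu divisor=%u\n",
	       (unsigned long long)total, divisor);
	if (divisor == 0)
		printf("do_div() with this divisor would divide by zero\n");
	return 0;
}
```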
In cgroup_exit() put_css_set_taskexit() is called without any lock, which might lead to accessing a freed cgroup:

thread1                           thread2
---------------------------------------------
exit()
  cgroup_exit()
    put_css_set_taskexit()
      atomic_dec(cgrp->count);
                                  rmdir();
      /* not safe !! */
      check_for_release(cgrp);

rcu_read_lock() can be used to make sure the cgroup is alive. Signed-off-by: Li Zefan <[email protected]> Signed-off-by: Tejun Heo <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
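A sketch of the fix the message describes, not the exact upstream diff: hold an RCU read-side critical section across the window in which a concurrent rmdir() could otherwise free the cgroup.

```c
/* Sketch only (field and helper names as used in the message above). */
rcu_read_lock();
atomic_dec(&cgrp->count);
/* cgrp cannot be freed while we are inside the RCU read-side section. */
check_for_release(cgrp);
rcu_read_unlock();
```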
msmsdcc_request_end, when called with interrupts disabled inside an interrupt handler, adds an unnecessary 5ms delay on command timeouts, which impacts system performance. Remove this delay. Change-Id: I90aec109fe84f7b9f9a9362b5ee2d8d1310833af Signed-off-by: Venkat Gopalakrishnan <[email protected]>
Capturing the BSV bit before the ID test in msm_otg_set_vbus_state() causes the suspend call to return without putting USB into LPM. Since there is no need for a BSV check in host mode, clear the BSV bit so that USB suspend can go through. This happens when a target enters host mode with no devices attached: USB OTG will never enter LPM, as the BSV bit will always be set. CRs-Fixed: 493539 Change-Id: I491b7ec058e799ca4e9aa6b34c44ea1908e8dbf0 Signed-off-by: Sujeet Kumar <[email protected]>
The USB PHY is put into non-driving mode and the pull-down resistors on DP and DM are disabled before starting charger detection. The PHY is put back into driving mode at the end of charger detection without enabling the pull-down resistors. Due to this, DP is pulled high when a dedicated charger is connected, and the USB controller does not allow PHY suspend if the line state is non-zero while a charger is connected. Enable the pull-down resistors at the end of charger detection to resolve this issue. CRs-Fixed: 509608 Change-Id: Ic1fc663b397e29e62d526b4146e92c93c84240ac Signed-off-by: Pavankumar Kondeti <[email protected]>
Protect the cgroup lists to fix corruption due to a race, since freeing of the css_set now happens in a new worker thread, free_css_set_work(). CRs-fixed: 715355 Change-Id: Iec4d7be0c372a17487f96bcdc1542afaff79dd1b Signed-off-by: Srinivasarao P <[email protected]>
There is an unhandled corner case with the current check which has recently been hit. The nbytes argument may be large enough for the range being dumped to cross a boundary between page-table mappings and dump addresses which are vmalloc'd in the neighboring mapping. Check every address being dumped to see if it is vmalloc'd, not just the address passed to show_data(). CRs-Fixed: 625442 Change-Id: I255b093808177321f5202dcacea337ee333bfc63 Signed-off-by: Matt Wagantall <[email protected]> Signed-off-by: franciscofranco <[email protected]> Signed-off-by: Chet Kener <[email protected]>
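A sketch of the per-address check being described, with illustrative structure (not the actual show_data() diff):

```c
/* Sketch only: validate every word in the dumped window rather than
 * only the address originally passed to show_data(). */
unsigned long p;

for (p = addr - nbytes; p < addr + nbytes; p += sizeof(u32)) {
	if (!virt_addr_valid(p) && !is_vmalloc_addr((void *)p))
		continue;	/* skip addresses that are not safely readable */
	/* ... read and print the 32-bit word at p ... */
}
```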
When a thread exits, mix its cputime (userspace + kernelspace) into the entropy pool. We don't know how "random" this is, so we use add_device_randomness(), which doesn't mess with the entropy count. Signed-off-by: Nick Kossifidis <[email protected]> Signed-off-by: Theodore Ts'o <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
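A sketch of what this looks like on the exit path (illustrative only; exact placement and types differ across kernel versions):

```c
/* Sketch only: fold the exiting task's cputime into the pool without
 * crediting any entropy for it. */
cputime_t utime, stime;

task_cputime(current, &utime, &stime);
add_device_randomness(&utime, sizeof(utime));
add_device_randomness(&stime, sizeof(stime));
```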
Signed-off-by: Greg Price <[email protected]> Signed-off-by: "Theodore Ts'o" <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
There's only one function here now, as uuid_strategy is long gone. Also make the bit about "If accesses via ..." clearer. Signed-off-by: Greg Price <[email protected]> Signed-off-by: "Theodore Ts'o" <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
After this remark was written, commit d2e7c96af added a use of arch_get_random_long() inside the get_random_bytes codepath. The main point stands, but it needs to be reworded. Signed-off-by: Greg Price <[email protected]> Signed-off-by: "Theodore Ts'o" <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
To help assuage the fears of those who think the NSA can introduce a massive hack into the instruction decode and out-of-order execution engine in the CPU without hundreds of Intel engineers knowing about it (only one of whom would need to have the conscience and courage of Edward Snowden to spill the beans to the public), use the HWRNG to initialize the SHA starting value, instead of xor'ing it in afterwards. Signed-off-by: "Theodore Ts'o" <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
Commit d7e58560111160f5478e13b2387d6b20c4102937 ("random: simplify accounting logic") simplified things too much, in that it allows the following to trigger an overflow that results in a BUG_ON crash: dd if=/dev/urandom of=/dev/zero bs=67108707 count=1 Thanks to Peter Zijlstra for discovering the crash, and Hannes Frederic for analyzing the root cause. Signed-off-by: "Theodore Ts'o" <[email protected]> Reported-by: Peter Zijlstra <[email protected]> Reported-by: Hannes Frederic Sowa <[email protected]> Cc: Greg Price <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
This typedef is unnecessary and should just be removed. Signed-off-by: Joe Perches <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]> Signed-off-by: Pranav Vashi <[email protected]>
Commit 0fb7a01af5b0 "random: simplify accounting code", introduced in v3.15, has a very nasty accounting problem when the entropy pool has fewer bytes of entropy than the number of requested reserved bytes. In that case, "have_bytes - reserved" goes negative, and since size_t is unsigned, the expression: ibytes = min_t(size_t, ibytes, have_bytes - reserved); ... does not do the right thing. This is rather bad, because it defeats the catastrophic reseeding feature in the xfer_secondary_pool() path. It can also cause the "BUG: spinlock trylock failure on UP" for some kernel configurations when prandom_reseed() calls get_random_bytes() in the early init, since when the entropy count gets corrupted, credit_entropy_bits() erroneously believes that the nonblocking pool has been fully initialized (when in fact it is not), and so it calls prandom_reseed(true) recursively, leading to the spinlock BUG. The logic is *not* the same as it was originally, but in the cases where it matters the behavior is the same, and the resulting code is hopefully easier to read and understand. Fixes: "random: simplify accounting code" Signed-off-by: Theodore Ts'o <[email protected]> Cc: Greg Price <[email protected]> Cc: [email protected] #v3.15 Signed-off-by: Pranav Vashi <[email protected]>
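The unsigned wrap at the heart of this bug is easy to demonstrate in standalone C (the "fixed" branch below is just one way to keep the comparison signed, not the upstream patch itself):

```c
#include <stdio.h>
#include <sys/types.h>

#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	size_t have_bytes = 10;   /* pool content */
	size_t reserved   = 16;   /* reserved bytes exceed what we have */
	size_t ibytes     = 32;   /* caller's request */

	/* have_bytes - reserved wraps to a huge unsigned value, so min_t()
	 * no longer limits the request at all. */
	size_t buggy = min_t(size_t, ibytes, have_bytes - reserved);

	/* Keep the subtraction signed, then clamp at zero. */
	ssize_t avail = (ssize_t)have_bytes - (ssize_t)reserved;
	size_t fixed = avail <= 0 ? 0 : min_t(size_t, ibytes, (size_t)avail);

	printf("buggy=%zu fixed=%zu\n", buggy, fixed); /* buggy=32 fixed=0 */
	return 0;
}
```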
The expression entropy_count -= ibytes << (ENTROPY_SHIFT + 3) could actually increase entropy_count if during assignment of the unsigned expression on the RHS (mind the -=) we reduce the value modulo 2^width(int) and assign it to entropy_count. Trinity found this. [ Commit modified by tytso to add an additional safety check for a negative entropy_count -- which should never happen, and to also add an additional paranoia check to prevent overly large count values to be passed into urandom_read(). ] Reported-by: Dave Jones <[email protected]> Signed-off-by: Hannes Frederic Sowa <[email protected]> Signed-off-by: Theodore Ts'o <[email protected]> Cc: [email protected] Signed-off-by: Pranav Vashi <[email protected]>
This reverts commit 9ad00d0452afe0eb0c02548f42f70756b847e867. The patch did not fix the list corruption, so reverting it. Change-Id: Ie210269c231972680d28f9cd6758be035c8b8c8e Signed-off-by: Srinivasarao P <[email protected]>
Accessing the css_set object should be protected with css_set_lock. A NULL check is required before accessing the object. CRs-fixed: 723762 Change-Id: Ia3a0314a04419889d5002a8f2bf2c1fe9dfa3671 Signed-off-by: Swetha Chikkaboraiah <[email protected]> I already have this commit: dorimanx/Dorimanx-LG-G2-D802-Kernel@57603bb, so I have just merged the cg && check from this commit: https://www.codeaurora.org/cgit/quic/la/kernel/msm/commit/?h=kk_3.5&id=4ac97cf6dc2fb87017a51ae533b932f5a2013c9f
Defined a new header file relaxed.h, which uses generic definitions of some macros used by arm64 for improving power efficiency. bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-on: http://git-master/r/398766 (cherry picked from commit a96e59b1959f3ee216503b4f9df3cb75f7093ed6) Reviewed-on: http://git-master/r/422211 Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Bharat Nihalani <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Defining relaxed version of idle_cpu, which uses macro cpu_relaxed_read_long, that will be used to enhance power efficiency. bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Sri Krishna Chowdary <[email protected]> Reviewed-by: Alexander Van Brunt <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Defined a new header file relaxed.h, which contains basic macros which will be used for improving power efficiency for arm64. bug 1440421 Signed-off-by: Sumit Singh <[email protected]> GVS: Gerrit_Virtual_Submit Reviewed-by: Bharat Nihalani <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Defining relaxed version of llist_empty as llist_empty_relaxed, which will be used for power-optimization. bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Sri Krishna Chowdary <[email protected]> Reviewed-by: Alexander Van Brunt <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Defining a new header file and adding architecture independent macros. Using these macros we are optimizing power usage on ARM64. bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Alexander Van Brunt <[email protected]> Reviewed-by: Sri Krishna Chowdary <[email protected]> Reviewed-by: Bharat Nihalani <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Here we are including asm-generic/processor.h because it defines some generic macros which can be used by any architecture for optimization of power usage. bug 1440421 Change-Id: I39db944d348b3dad9235b76d8c7ea3867c0006ce Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Alexander Van Brunt <[email protected]> Reviewed-by: Peng Du <[email protected]> Reviewed-by: Sri Krishna Chowdary <[email protected]> Reviewed-by: Bharat Nihalani <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Here we are trying to reduce power usage through the use of macros cpu_relaxed_read and relaxed version of idle_cpu(). Bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Sri Krishna Chowdary <[email protected]> Reviewed-by: Alexander Van Brunt <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Trying to improve the power efficiency in linux/seqlock.h, using macros cpu_relaxed_read and cpu_read_relax. Bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Sri Krishna Chowdary <[email protected]> Reviewed-by: Alexander Van Brunt <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Using macros cpu_relaxed_read, cpu_relaxed_read_long and cpu_read_relax in core.c to improve power efficiency. Bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Bharat Nihalani <[email protected]> Tested-by: Bharat Nihalani <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
Defining relaxed version of hrtimer_callback_running(), which will be used to improve power efficiency through the use of macro cpu_relaxed_read_long. Bug 1440421 Signed-off-by: Sumit Singh <[email protected]> Reviewed-by: Bharat Nihalani <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
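Since this series never shows the macros themselves, here is a hedged sketch of plausible generic fallbacks and a typical polling loop; the real definitions added in relaxed.h / asm-generic/processor.h may differ (on arm64 they are expected to use a low-power wait rather than a plain load).

```c
/* Sketch only: plausible generic fallbacks for the macros named above. */
#ifndef cpu_relaxed_read
#define cpu_relaxed_read(p)       (*(p))       /* plain load on generic arches */
#define cpu_relaxed_read_long(p)  (*(p))
#define cpu_read_relax()          cpu_relax()  /* CPU "I'm spinning" hint */
#endif

/* Typical use: poll a flag without hammering the cacheline, letting the
 * architecture park the core between reads where it can. */
static inline void wait_for_flag(volatile unsigned long *flag)
{
	while (!cpu_relaxed_read_long(flag))
		cpu_read_relax();
}
```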
There are reports that -fconserve-stack misaligns variables on the stack. Disable it for ARM to work around this gcc bug. v2: Move top level flags definition up Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Catalin Marinas <[email protected]> Signed-off-by: Pranav Vashi <[email protected]> Signed-off-by: Chet Kener <[email protected]>
As pointed out by Arnd Bergmann, this fixes a couple of issues but will increase code size: The original macro user_termio_to_kernel_termios was not endian safe. It used an unsigned short ptr to access the low bits in a 32-bit word. Both user_termio_to_kernel_termios and kernel_termios_to_user_termio are missing error checking on put_user/get_user and copy_to/from_user. Signed-off-by: Rob Herring <[email protected]> Reviewed-by: Nicolas Pitre <[email protected]> Tested-by: Thomas Petazzoni <[email protected]> Reviewed-by: Arnd Bergmann <[email protected]> Signed-off-by: Russell King <[email protected]> Signed-off-by: franciscofranco <[email protected]> Conflicts: arch/arm/include/asm/Kbuild
Signed-off-by: franciscofranco <[email protected]>
The system uses global_dirtyable_memory() to calculate the number of dirtyable pages, i.e. pages that can be allocated to the page cache. A bug causes an underflow, making the page count look like a big unsigned number. This in turn confuses the dirty writeback throttling into aggressively writing back pages as they become dirty (usually 1 page at a time). The fix is to ensure there is no underflow while doing the math. Signed-off-by: Sonny Rao <[email protected]> Signed-off-by: Puneet Kumar <[email protected]> BUG=chrome-os-partner:16011 TEST=Manual; boot kernel, powerwash, login with testaccount and make sure no jank occurs on sync of applications Change-Id: I614e7c3156e014f0f28a4ef9bdd8cb8a2cd07b2a Reviewed-on: https://gerrit.chromium.org/gerrit/37612 Reviewed-by: Kees Cook <[email protected]> Reviewed-by: Olof Johansson <[email protected]> Commit-Ready: Puneet Kumar <[email protected]> Reviewed-by: Puneet Kumar <[email protected]> Tested-by: Puneet Kumar <[email protected]>
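The underflow itself is simple unsigned arithmetic; a standalone illustration (the values are made up, and the clamp shown is just the general shape of such a fix, not the exact patch):

```c
#include <stdio.h>

int main(void)
{
	unsigned long dirtyable = 1000;  /* estimated dirtyable pages */
	unsigned long reserve   = 1500;  /* reserve larger than the estimate */

	/* Buggy: the subtraction wraps to a huge page count, which is what
	 * confused the dirty writeback throttling described above. */
	unsigned long buggy = dirtyable - reserve;

	/* Fixed shape: never subtract more than is actually there. */
	unsigned long fixed = dirtyable - (reserve < dirtyable ? reserve : dirtyable);

	printf("buggy=%lu fixed=%lu\n", buggy, fixed); /* fixed == 0 */
	return 0;
}
```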
migration_call() will do all the things that update_runtime() does, so update_runtime() appears to be a redundant notifier; remove it. Furthermore, there is a potential risk that the current code will hit the BUG_ON at line 687 of rt.c when doing CPU hotplug while realtime threads are running, because runtime gets enabled twice. Change-Id: I0fdad8d5a1cebb845d3f308b205dbd6517c3e4de Cc: [email protected] Signed-off-by: Neil Zhang <[email protected]> Reviewed-on: http://git-master/r/215596 (cherry picked from commit 8f646de983f24361814d9a6ca679845fb2265807) Reviewed-on: http://git-master/r/223067 Reviewed-by: Peter Boonstoppel <[email protected]> Tested-by: Peter Boonstoppel <[email protected]> Reviewed-by: Paul Walmsley <[email protected]> Reviewed-by: Automatic_Commit_Validation_User GVS: Gerrit_Virtual_Submit Reviewed-by: Diwakar Tundlam <[email protected]>
justin0406 pushed a commit to justin0406/AK-GEE that referenced this pull request on Oct 9, 2014
brothaedhung pushed a commit that referenced this pull request on Nov 22, 2014