In the Linux kernel, the following vulnerability has been resolved:
blk-cgroup: fix list corruption from resetting io stat
Since commit 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()"),
each iostat instance is added to a blkcg per-CPU llist, so blkcg_reset_stats()
can't reset a stat instance with memset(); otherwise the llist may be
corrupted.
Fix the issue by only resetting the counter part.
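To illustrate why the memset() is dangerous (a minimal sketch with hypothetical names, not the actual blk-cgroup structures): the per-CPU stat object embeds the llist_node that links it into the per-CPU list, so zeroing the whole object destroys that linkage, while zeroing only the counters leaves the list intact.

#include <linux/llist.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical per-CPU stat object: counters plus a lockless-list node. */
struct iostat_entry {
        u64 bytes;
        u64 ios;
        struct llist_node lnode;        /* links the entry into a per-CPU llist */
};

/* BAD: wiping the whole object also clears lnode and corrupts the llist. */
static void reset_stat_broken(struct iostat_entry *e)
{
        memset(e, 0, sizeof(*e));
}

/* OK: reset only the counter part; the list linkage stays untouched. */
static void reset_stat_fixed(struct iostat_entry *e)
{
        e->bytes = 0;
        e->ios = 0;
}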
In the Linux kernel, the following vulnerability has been resolved:
drm/xe: Only use reserved BCS instances for usm migrate exec queue
The GuC context scheduling queue is 2 entries deep, so it is possible
for a migration job to get stuck behind a fault if the migration exec
queue shares engines with user jobs. This can deadlock, as the migrate
exec queue is required to service page faults. Avoid the deadlock by only
using reserved BCS instances for the usm migrate exec queue.
(cherry picked from commit 04f4a70a183a688a60fe3882d6e4236ea02cfc67)
In the Linux kernel, the following vulnerability has been resolved:
fpga: manager: add owner module and take its refcount
The current implementation of the fpga manager assumes that the low-level
module registers a driver for the parent device and uses its owner pointer
to take the module's refcount. This approach is problematic since it can
lead to a null pointer dereference while attempting to get the manager if
the parent device does not have a driver.
To address this problem, add a module owner pointer to the fpga_manager
struct and use it to take the module's refcount. Modify the functions for
registering the manager to take an additional owner module parameter and
rename them to avoid conflicts. Use the old function names for helper
macros that automatically set the module that registers the manager as the
owner. This ensures compatibility with existing low-level control modules
and reduces the chances of registering a manager without setting the owner.
Also, update the documentation to keep it consistent with the new interface
for registering an fpga manager.
Other changes: opportunistically move put_device() from __fpga_mgr_get() to
fpga_mgr_get() and of_fpga_mgr_get() to improve code clarity since the
manager device is taken in these functions.
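As a sketch of the registration pattern described above (hypothetical, heavily simplified signatures rather than the real fpga_manager API): the renamed registration function takes an explicit owner module, and a helper macro under the old name passes THIS_MODULE so existing low-level modules keep working unchanged.

#include <linux/device.h>
#include <linux/module.h>

struct fpga_manager {
        struct device dev;
        struct module *owner;   /* module to pin while the manager is in use */
};

/* Renamed low-level registration helper that records the owner module. */
struct fpga_manager *__fpga_mgr_register(struct device *parent,
                                         struct module *owner);

/*
 * Helper macro under the old name: callers stay unchanged, and the module
 * that registers the manager is recorded as the owner automatically.
 */
#define fpga_mgr_register(parent) __fpga_mgr_register(parent, THIS_MODULE)

/* When getting the manager, pin its owner instead of the parent's driver. */
static inline int fpga_mgr_pin_owner(struct fpga_manager *mgr)
{
        return try_module_get(mgr->owner) ? 0 : -ENODEV;
}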
In the Linux kernel, the following vulnerability has been resolved:
fpga: bridge: add owner module and take its refcount
The current implementation of the fpga bridge assumes that the low-level
module registers a driver for the parent device and uses its owner pointer
to take the module's refcount. This approach is problematic since it can
lead to a null pointer dereference while attempting to get the bridge if
the parent device does not have a driver.
To address this problem, add a module owner pointer to the fpga_bridge
struct and use it to take the module's refcount. Modify the function for
registering a bridge to take an additional owner module parameter and
rename it to avoid conflicts. Use the old function name for a helper macro
that automatically sets the module that registers the bridge as the owner.
This ensures compatibility with existing low-level control modules and
reduces the chances of registering a bridge without setting the owner.
Also, update the documentation to keep it consistent with the new interface
for registering an fpga bridge.
Other changes: opportunistically move put_device() from __fpga_bridge_get()
to fpga_bridge_get() and of_fpga_bridge_get() to improve code clarity since
the bridge device is taken in these functions.
In the Linux kernel, the following vulnerability has been resolved:
f2fs: compress: don't allow unaligned truncation on released compress inode
An f2fs image may be corrupted by the testcase below:
- mkfs.f2fs -O extra_attr,compression -f /dev/vdb
- mount /dev/vdb /mnt/f2fs
- touch /mnt/f2fs/file
- f2fs_io setflags compression /mnt/f2fs/file
- dd if=/dev/zero of=/mnt/f2fs/file bs=4k count=4
- f2fs_io release_cblocks /mnt/f2fs/file
- truncate -s 8192 /mnt/f2fs/file
- umount /mnt/f2fs
- fsck.f2fs /dev/vdb
[ASSERT] (fsck_chk_inode_blk:1256) --> ino: 0x5 has i_blocks: 0x00000002, but has 0x3 blocks
[FSCK] valid_block_count matching with CP [Fail] [0x4, 0x5]
[FSCK] other corrupted bugs [Fail]
The reason is that partial truncation assumes the compressed inode has
reserved blocks; after a partial truncation, the valid block count may
change without updating .i_blocks and .total_valid_block_count, resulting
in corruption.
To fix this, only allow cluster-size-aligned truncation on a released
compressed inode.
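A minimal sketch of that kind of guard (a hypothetical helper, not the exact f2fs code): reject a truncation whose new size is not aligned to the compress cluster size when the inode's compressed blocks have been released.

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/types.h>

/* cluster_blocks: compress cluster size in blocks (a power of two in f2fs). */
static int check_released_truncate(loff_t new_size, unsigned int cluster_blocks,
                                   unsigned int blkbits, bool blocks_released)
{
        loff_t cluster_bytes = (loff_t)cluster_blocks << blkbits;

        /*
         * A released compressed inode has no reserved blocks backing a
         * partial cluster, so only allow cluster-aligned truncation.
         */
        if (blocks_released && !IS_ALIGNED(new_size, cluster_bytes))
                return -EINVAL;
        return 0;
}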
A Path Traversal vulnerability exists in the parisneo/lollms-webui, specifically within the 'add_reference_to_local_mode' function due to the lack of input sanitization. This vulnerability affects versions v9.6 to the latest. By exploiting this vulnerability, an attacker can predict the folders, subfolders, and files present on the victim's computer. The vulnerability is present in the way the application handles the 'path' parameter in HTTP requests to the '/add_reference_to_local_model' endpoint.
In the Linux kernel, the following vulnerability has been resolved:
dma-buf/sw-sync: don't enable IRQ from sync_print_obj()
Commit a6aa8fca4d79 ("dma-buf/sw-sync: Reduce irqsave/irqrestore from
known context") mistakenly replaced spin_unlock_irqrestore() with
spin_unlock_irq() in both sync_debugfs_show() and sync_print_obj(), even
though sync_print_obj() is called from sync_debugfs_show(); lockdep
therefore complains with an inconsistent-lock-state warning.
Use plain spin_{lock,unlock}() in sync_print_obj(), since
sync_debugfs_show() already uses spin_{lock,unlock}_irq().
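A generic illustration of the locking rule (a sketch, not the sw-sync code itself): when the outer function already disables interrupts with spin_lock_irq(), an inner helper it calls must use the plain lock/unlock variants so it does not re-enable interrupts behind the caller's back.

#include <linux/list.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(list_lock);      /* protects the list of objects */

struct obj {
        spinlock_t lock;                /* protects a single object */
        struct list_head node;
};

/* Called only from show_all(), i.e. with IRQs already disabled. */
static void print_obj(struct obj *o)
{
        spin_lock(&o->lock);            /* plain variants: leave the IRQ state alone */
        /* ... dump the object ... */
        spin_unlock(&o->lock);          /* must NOT re-enable IRQs here */
}

static void show_all(struct list_head *objs)
{
        struct obj *o;

        spin_lock_irq(&list_lock);      /* IRQs stay off for the whole walk */
        list_for_each_entry(o, objs, node)
                print_obj(o);
        spin_unlock_irq(&list_lock);    /* IRQs re-enabled exactly once, here */
}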
In the Linux kernel, the following vulnerability has been resolved:
SUNRPC: Fix loop termination condition in gss_free_in_token_pages()
The in_token->pages[] array is not NULL terminated. This results in
the following KASAN splat:
KASAN: maybe wild-memory-access in range [0x04a2013400000008-0x04a201340000000f]
Improper Input Validation vulnerability in ABB 800xA Base.
An attacker who successfully exploited this
vulnerability could cause services to crash by sending specifically crafted messages.
This issue affects 800xA Base: from 6.0.0 through 6.1.1-2.
In the Linux kernel, the following vulnerability has been resolved:
enic: Validate length of nl attributes in enic_set_vf_port
enic_set_vf_port assumes that the nl attribute IFLA_PORT_PROFILE
is of length PORT_PROFILE_MAX and that the nl attributes
IFLA_PORT_INSTANCE_UUID, IFLA_PORT_HOST_UUID are of length PORT_UUID_MAX.
These attributes are validated (in the function do_setlink in rtnetlink.c)
using the nla_policy ifla_port_policy. The policy defines IFLA_PORT_PROFILE
as NLA_STRING, IFLA_PORT_INSTANCE_UUID as NLA_BINARY and
IFLA_PORT_HOST_UUID as NLA_STRING. That means the length validation done
by the policy only enforces a maximum size for the attributes, not an
exact size, so the length of these attributes might be less than the
sizes that enic_set_vf_port expects. This might cause an out-of-bounds
read access in the memcpys of the data of these attributes in
enic_set_vf_port.
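A sketch of the kind of check implied (a hypothetical, simplified stand-in for enic_set_vf_port): verify the exact attribute length with nla_len() before memcpy()ing a fixed-size amount out of nla_data().

#include <linux/if_link.h>      /* IFLA_PORT_*, PORT_PROFILE_MAX, PORT_UUID_MAX */
#include <linux/string.h>
#include <net/netlink.h>

static int copy_port_attrs(u8 *profile, u8 *uuid, struct nlattr **port)
{
        /*
         * The NLA_STRING/NLA_BINARY policy only enforces a maximum length,
         * so an attribute may be shorter than the fixed-size copy below.
         */
        if (port[IFLA_PORT_PROFILE]) {
                if (nla_len(port[IFLA_PORT_PROFILE]) != PORT_PROFILE_MAX)
                        return -EINVAL;
                memcpy(profile, nla_data(port[IFLA_PORT_PROFILE]),
                       PORT_PROFILE_MAX);
        }
        if (port[IFLA_PORT_INSTANCE_UUID]) {
                if (nla_len(port[IFLA_PORT_INSTANCE_UUID]) != PORT_UUID_MAX)
                        return -EINVAL;
                memcpy(uuid, nla_data(port[IFLA_PORT_INSTANCE_UUID]),
                       PORT_UUID_MAX);
        }
        return 0;
}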
In the Linux kernel, the following vulnerability has been resolved:
greybus: lights: check return of get_channel_from_mode
If a channel for the given node is not found, get_channel_from_mode()
returns NULL. Make sure we validate the returned pointer before using it
in the two places where the check is missing.
This was originally reported in [0]:
Found by Linux Verification Center (linuxtesting.org) with SVACE.
[0] https://lore.kernel.org/all/20240301190425.120605-1-m.lobanov@rosalinux.ru
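The fix pattern is a plain NULL check before use (a sketch with a hypothetical call site; gb_light, gb_channel, get_channel_from_mode() and the level field are assumed, taken loosely from the greybus lights driver rather than copied from it).

#include <linux/errno.h>

static int set_channel_level(struct gb_light *light, u32 mode, u8 level)
{
        struct gb_channel *channel = get_channel_from_mode(light, mode);

        if (!channel)           /* no channel for this node: don't dereference */
                return -EINVAL;

        channel->level = level; /* illustrative use of the validated pointer */
        return 0;
}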
In the Linux kernel, the following vulnerability has been resolved:
f2fs: multidev: fix to recognize valid zero block address
As reported by Yi Zhang on the mailing list [1], a kernel warning was
caught during the zbd/010 test, as below:
./check zbd/010
zbd/010 (test gap zone support with F2FS) [failed]
runtime ... 3.752s
something found in dmesg:
[ 4378.146781] run blktests zbd/010 at 2024-02-18 11:31:13
[ 4378.192349] null_blk: module loaded
[ 4378.209860] null_blk: disk nullb0 created
[ 4378.413285] scsi_debug:sdebug_driver_probe: scsi_debug: trim
poll_queues to 0. poll_q/nr_hw = (0/1)
[ 4378.422334] scsi host15: scsi_debug: version 0191 [20210520]
dev_size_mb=1024, opts=0x0, submit_queues=1, statistics=0
[ 4378.434922] scsi 15:0:0:0: Direct-Access-ZBC Linux
scsi_debug 0191 PQ: 0 ANSI: 7
[ 4378.443343] scsi 15:0:0:0: Power-on or device reset occurred
[ 4378.449371] sd 15:0:0:0: Attached scsi generic sg5 type 20
[ 4378.449418] sd 15:0:0:0: [sdf] Host-managed zoned block device
...
(See '/mnt/tests/gitlab.com/api/v4/projects/19168116/repository/archive.zip/storage/blktests/blk/blktests/results/nodev/zbd/010.dmesg'
WARNING: CPU: 22 PID: 44011 at fs/iomap/iter.c:51
CPU: 22 PID: 44011 Comm: fio Not tainted 6.8.0-rc3+ #1
RIP: 0010:iomap_iter+0x32b/0x350
Call Trace:
<TASK>
__iomap_dio_rw+0x1df/0x830
f2fs_file_read_iter+0x156/0x3d0 [f2fs]
aio_read+0x138/0x210
io_submit_one+0x188/0x8c0
__x64_sys_io_submit+0x8c/0x1a0
do_syscall_64+0x86/0x170
entry_SYSCALL_64_after_hwframe+0x6e/0x76
Shinichiro Kawasaki helped analyse this issue and proposed a potential
fix in [2].
Quoted from Shinichiro Kawasaki's reply:
"I confirmed that the trigger commit is dbf8e63f48af as Yi reported. I took a
look in the commit, but it looks fine to me. So I thought the cause is not
in the commit diff.
I found the WARN is printed when the f2fs is set up with multiple devices,
and read requests are mapped to the very first block of the second device in the
direct read path. In this case, f2fs_map_blocks() and f2fs_map_blocks_cached()
modify map->m_pblk as the physical block address from each block device. It
becomes zero when it is mapped to the first block of the device. However,
f2fs_iomap_begin() assumes that map->m_pblk is the physical block address of the
whole f2fs, across the all block devices. It compares map->m_pblk against
NULL_ADDR == 0, then go into the unexpected branch and sets the invalid
iomap->length. The WARN catches the invalid iomap->length.
This WARN is printed even for non-zoned block devices, by following steps.
- Create two (non-zoned) null_blk devices memory backed with 128MB size each:
nullb0 and nullb1.
# mkfs.f2fs /dev/nullb0 -c /dev/nullb1
# mount -t f2fs /dev/nullb0 "${mount_dir}"
# dd if=/dev/zero of="${mount_dir}/test.dat" bs=1M count=192
# dd if="${mount_dir}/test.dat" of=/dev/null bs=1M count=192 iflag=direct
..."
So, the root cause of this issue is: when the multi-device feature is on,
f2fs_map_blocks() may return a zero blkaddr for a non-primary device, which
is a verified, valid block address; however, f2fs_iomap_begin() treats it
as an invalid block address and then triggers the warning in the iomap
framework code.
Finally, as discussed, we decided on a simpler and more direct fix: check
the (map.m_flags & F2FS_MAP_MAPPED) condition instead of
(map.m_pblk != NULL_ADDR).
Thanks a lot for the effort of Yi Zhang and Shinichiro Kawasaki on this
issue.
[1] https://lore.kernel.org/linux-f2fs-devel/CAHj4cs-kfojYC9i0G73PRkYzcxCTex=-vugRFeP40g_URGvnfQ@mail.gmail.com/
[2] https://lore.kernel.org/linux-f2fs-devel/gngdj77k4picagsfdtiaa7gpgnup6fsgwzsltx6milmhegmjff@iax2n4wvrqye/
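A minimal sketch of the decision being fixed (field names taken from the description above; the surrounding f2fs definitions such as struct f2fs_map_blocks and F2FS_MAP_MAPPED are assumed):

static bool extent_is_mapped(const struct f2fs_map_blocks *map)
{
        /*
         * map->m_pblk is relative to the device the extent lives on, so it
         * can legitimately be 0 on a non-primary device.  Comparing it with
         * NULL_ADDR (== 0) misclassifies such extents; the MAPPED flag is
         * the reliable signal for f2fs_iomap_begin() to use.
         */
        return map->m_flags & F2FS_MAP_MAPPED;
}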
In the Linux kernel, the following vulnerability has been resolved:
serial: max3100: Lock port->lock when calling uart_handle_cts_change()
uart_handle_cts_change() has to be called with the port lock taken.
Since we run it from a separate work item, the lock may not be held at
that point. Make sure it is held by taking it explicitly. Without this
we get a splat:
WARNING: CPU: 0 PID: 10 at drivers/tty/serial/serial_core.c:3491 uart_handle_cts_change+0xa6/0xb0
...
Workqueue: max3100-0 max3100_work [max3100]
RIP: 0010:uart_handle_cts_change+0xa6/0xb0
...
max3100_handlerx+0xc5/0x110 [max3100]
max3100_work+0x12a/0x340 [max3100]
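A sketch of the locking fix described (a generic form using the port spinlock; the surrounding max3100 work-function context is assumed):

#include <linux/serial_core.h>
#include <linux/spinlock.h>

/* Runs from a workqueue, so the port lock is not held here by default. */
static void report_cts(struct uart_port *port, bool cts)
{
        unsigned long flags;

        spin_lock_irqsave(&port->lock, flags);
        uart_handle_cts_change(port, cts);      /* requires the port lock held */
        spin_unlock_irqrestore(&port->lock, flags);
}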
In the Linux kernel, the following vulnerability has been resolved:
serial: max3100: Update uart_driver_registered on driver removal
The removal of the last MAX3100 device triggers the removal of
the driver. However, the code doesn't update the respective global
variable, and after an insmod -> rmmod -> insmod cycle the kernel
oopses:
max3100 spi-PRP0001:01: max3100_probe: adding port 0
BUG: kernel NULL pointer dereference, address: 0000000000000408
...
RIP: 0010:serial_core_register_port+0xa0/0x840
...
max3100_probe+0x1b6/0x280 [max3100]
spi_probe+0x8d/0xb0
Update the actual state so that next time the UART driver will be
registered again.
Hugo also noticed that the error path in the probe is also affected by
the variable being set and not cleared. Instead of clearing it, move the
assignment to after the successful uart_register_driver() call.
In the Linux kernel, the following vulnerability has been resolved:
vfio/pci: fix potential memory leak in vfio_intx_enable()
If vfio_irq_ctx_alloc() fails, the 'name' allocation is leaked.
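A sketch of the leak and its fix (a hypothetical simplification of vfio_intx_enable(); vfio_irq_ctx_alloc() and the ctx layout are assumed from the vfio/pci code):

#include <linux/pci.h>
#include <linux/slab.h>

static int intx_enable_sketch(struct vfio_pci_core_device *vdev)
{
        struct vfio_pci_irq_ctx *ctx;
        char *name;

        name = kasprintf(GFP_KERNEL, "vfio-intx(%s)", pci_name(vdev->pdev));
        if (!name)
                return -ENOMEM;

        ctx = vfio_irq_ctx_alloc(vdev, 0);
        if (!ctx) {
                kfree(name);    /* the fix: don't leak 'name' on this error path */
                return -ENOMEM;
        }

        ctx->name = name;
        return 0;
}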
In the Linux kernel, the following vulnerability has been resolved:
dmaengine: idxd: Avoid unnecessary destruction of file_ida
file_ida is allocated during cdev open and is freed accordingly
during cdev release. This sequence is guaranteed by driver file
operations. Therefore, there is no need to destroy an already empty
file_ida when the WQ cdev is removed.
Worse, ida_free() in cdev release may happen after destruction of
file_ida per WQ cdev. This can lead to accessing an id in file_ida
after it has been destroyed, resulting in a kernel panic.
Remove ida_destroy(&file_ida) to address these issues.
In the Linux kernel, the following vulnerability has been resolved:
stm class: Fix a double free in stm_register_device()
The put_device(&stm->dev) call will trigger stm_device_release() which
frees "stm" so the vfree(stm) on the next line is a double free.
In the Linux kernel, the following vulnerability has been resolved:
fuse: clear FR_SENT when re-adding requests into pending list
The following warning was reported by lee bruce:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 8264 at fs/fuse/dev.c:300
fuse_request_end+0x685/0x7e0 fs/fuse/dev.c:300
Modules linked in:
CPU: 0 PID: 8264 Comm: ab2 Not tainted 6.9.0-rc7
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
RIP: 0010:fuse_request_end+0x685/0x7e0 fs/fuse/dev.c:300
......
Call Trace:
<TASK>
fuse_dev_do_read.constprop.0+0xd36/0x1dd0 fs/fuse/dev.c:1334
fuse_dev_read+0x166/0x200 fs/fuse/dev.c:1367
call_read_iter include/linux/fs.h:2104 [inline]
new_sync_read fs/read_write.c:395 [inline]
vfs_read+0x85b/0xba0 fs/read_write.c:476
ksys_read+0x12f/0x260 fs/read_write.c:619
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xce/0x260 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
......
</TASK>
The warning is due to the FUSE_NOTIFY_RESEND notify sent by the write()
syscall in the reproducer program and it happens as follows:
(1) calls fuse_dev_read() to read the INIT request
The read succeeds. During the read, bit FR_SENT will be set on the
request.
(2) calls fuse_dev_write() to send a FUSE_NOTIFY_RESEND notify
The resend notify will resend all processing requests, so the INIT
request is moved from processing list to pending list again.
(3) calls fuse_dev_read() with an invalid output address
fuse_dev_read() will try to copy the same INIT request to the output
address, but it will fail due to the invalid address, so the INIT
request is ended and triggers the warning in fuse_request_end().
Fix it by clearing FR_SENT when re-adding requests into pending list.
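The fix boils down to one bit operation when a request is moved back to the pending list (a sketch; struct fuse_req, FR_SENT, and the fuse device locking around it are assumed):

#include <linux/bitops.h>
#include <linux/list.h>

/* When a processing request is re-queued onto the pending list it has not
 * been sent to userspace for this round yet, so drop FR_SENT again. */
static void requeue_request(struct fuse_req *req, struct list_head *pending)
{
        clear_bit(FR_SENT, &req->flags);        /* the actual fix */
        list_move(&req->list, pending);
}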
In the Linux kernel, the following vulnerability has been resolved:
fs/ntfs3: Use 64 bit variable to avoid 32 bit overflow
For example, in the expression:
vbo = 2 * vbo + skip
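To make the overflow concrete (an arithmetic sketch, not the ntfs3 code): if vbo has a 32-bit type, 2 * vbo wraps for any vbo >= 2 GiB, whereas doing the arithmetic in a 64-bit variable preserves the value.

#include <linux/types.h>

static u64 next_vbo(u64 vbo, u64 skip)
{
        /*
         * With a 32-bit vbo, e.g. vbo = 0x80000000 (2 GiB), "2 * vbo" wraps
         * to 0 before the addition.  Computing in u64, as the fix does,
         * yields the intended 0x100000000 + skip.
         */
        return 2 * vbo + skip;
}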
In the Linux kernel, the following vulnerability has been resolved:
media: stk1160: fix bounds checking in stk1160_copy_video()
The subtraction in this condition is reversed. The ->length is the length
of the buffer. The ->bytesused is how many bytes we have copied so
far. With the condition reversed, the result of the subtraction is always
negative, but since it's unsigned it becomes a very large positive value.
That means the overflow check is never true.
Additionally, the ->bytesused doesn't actually work for this purpose
because we're not writing to "buf->mem + buf->bytesused". Instead, the
math to calculate the destination where we are writing is a bit
involved. You calculate the number of full lines already written,
multiply by two, skip a line if necessary so that we start on an odd
numbered line, and add the offset into the line.
To fix this buffer overflow, just take the actual destination where we
are writing; if the offset is already out of bounds, print an error and
return. Otherwise, write up to buf->length bytes.
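A sketch of the corrected bounds check (hypothetical names; the real destination math in stk1160_copy_video() is more involved, as described above):

#include <linux/errno.h>
#include <linux/minmax.h>
#include <linux/printk.h>
#include <linux/string.h>
#include <linux/types.h>

/* dst points somewhere inside the buffer memory [mem, mem + length). */
static int copy_line_checked(u8 *dst, const u8 *src, size_t linelen,
                             const u8 *mem, size_t length)
{
        size_t offset = dst - mem;              /* actual write offset */

        if (offset >= length) {                 /* already past the end */
                pr_err("stk1160: buffer overflow detected\n");
                return -EINVAL;
        }

        /* clamp so we never write past buf->length */
        memcpy(dst, src, min(linelen, length - offset));
        return 0;
}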
In the Linux kernel, the following vulnerability has been resolved:
nfc: nci: Fix uninit-value in nci_rx_work
syzbot reported the following uninit-value access issue [1].
nci_rx_work() parses received packets from ndev->rx_q. The header size,
payload size, and total packet size should be validated before processing
the packet. If an invalid packet is detected, it should be silently
discarded.
In the Linux kernel, the following vulnerability has been resolved:
null_blk: fix null-ptr-dereference while configuring 'power' and 'submit_queues'
Writing 'power' and 'submit_queues' concurrently will trigger kernel
panic:
Test script:
modprobe null_blk nr_devices=0
mkdir -p /sys/kernel/config/nullb/nullb0
while true; do echo 1 > submit_queues; echo 4 > submit_queues; done &
while true; do echo 1 > power; echo 0 > power; done
Test result:
BUG: kernel NULL pointer dereference, address: 0000000000000148
Oops: 0000 [#1] PREEMPT SMP
RIP: 0010:__lock_acquire+0x41d/0x28f0
Call Trace:
<TASK>
lock_acquire+0x121/0x450
down_write+0x5f/0x1d0
simple_recursive_removal+0x12f/0x5c0
blk_mq_debugfs_unregister_hctxs+0x7c/0x100
blk_mq_update_nr_hw_queues+0x4a3/0x720
nullb_update_nr_hw_queues+0x71/0xf0 [null_blk]
nullb_device_submit_queues_store+0x79/0xf0 [null_blk]
configfs_write_iter+0x119/0x1e0
vfs_write+0x326/0x730
ksys_write+0x74/0x150
This is because del_gendisk() can run concurrently with
blk_mq_update_nr_hw_queues():
nullb_device_power_store        nullb_apply_submit_queues
 null_del_dev
  del_gendisk
                                nullb_update_nr_hw_queues
                                 if (!dev->nullb)
                                 // still set while gendisk is deleted
                                  return 0
                                 blk_mq_update_nr_hw_queues
 dev->nullb = NULL
Fix this problem by reusing the global mutex to protect
nullb_device_power_store() and nullb_update_nr_hw_queues() from configfs.
In the Linux kernel, the following vulnerability has been resolved:
net/sched: taprio: extend minimum interval restriction to entire cycle too
It is possible for syzbot to side-step the restriction imposed by the
blamed commit in the Fixes: tag, because the taprio UAPI permits a
cycle-time different from (and potentially shorter than) the sum of
entry intervals.
We need one more restriction, which is that the cycle time itself must
be larger than N * ETH_ZLEN bit times, where N is the number of schedule
entries. This restriction needs to apply regardless of whether the cycle
time came from the user or was the implicit, auto-calculated value, so
we move the existing "cycle == 0" check outside the "if (!new->cycle_time)"
branch so that it covers both scenarios.
Add a selftest which illustrates the issue triggered by syzbot.
In the Linux kernel, the following vulnerability has been resolved:
genirq/cpuhotplug, x86/vector: Prevent vector leak during CPU offline
The absence of IRQD_MOVE_PCNTXT prevents immediate effectiveness of
interrupt affinity reconfiguration via procfs. Instead, the change is
deferred until the next instance of the interrupt being triggered on the
original CPU.
When the interrupt next triggers on the original CPU, the new affinity is
enforced within __irq_move_irq(). A vector is allocated from the new CPU,
but the old vector on the original CPU remains and is not immediately
reclaimed. Instead, apicd->move_in_progress is flagged, and the reclaiming
process is delayed until the next trigger of the interrupt on the new CPU.
Upon the subsequent triggering of the interrupt on the new CPU,
irq_complete_move() adds a task to the old CPU's vector_cleanup list if it
remains online. Subsequently, the timer on the old CPU iterates over its
vector_cleanup list, reclaiming old vectors.
However, a rare scenario arises if the old CPU is outgoing before the
interrupt triggers again on the new CPU.
In that case irq_force_complete_move() is not invoked on the outgoing CPU
to reclaim the old apicd->prev_vector because the interrupt isn't currently
affine to the outgoing CPU, and irq_needs_fixup() returns false. Even
though __vector_schedule_cleanup() is later called on the new CPU, it
doesn't reclaim apicd->prev_vector; instead, it simply resets both
apicd->move_in_progress and apicd->prev_vector to 0.
As a result, the vector remains unreclaimed in vector_matrix, leading to a
CPU vector leak.
To address this issue, move the invocation of irq_force_complete_move()
before the irq_needs_fixup() call to reclaim apicd->prev_vector, if the
interrupt is currently affine, or was previously affine, to the outgoing CPU.
Additionally, reclaim the vector in __vector_schedule_cleanup() as well,
following a warning message, although theoretically it should never see
apicd->move_in_progress with apicd->prev_cpu pointing to an offline CPU.
A vulnerability, which was classified as critical, has been found in itsourcecode Vehicle Management System 1.0. Affected by this issue is some unknown functionality of the file busprofile.php. The manipulation of the argument busid leads to sql injection. The attack may be launched remotely. The exploit has been disclosed to the public and may be used. VDB-269282 is the identifier assigned to this vulnerability.
The Kiuwan Local Analyzer (KLA) Java scanning application contains several
hard-coded secrets in plain text format. In some cases, this can
potentially compromise the confidentiality of the scan results. Several credentials were found in the JAR files of the Kiuwan Local Analyzer.
The
JAR file "lib.engine/insight/optimyth-insight.jar" contains the file
"InsightServicesConfig.properties", which has the configuration tokens
"insight.github.user" as well as "insight.github.password" prefilled
with credentials. At least the specified username corresponds to a valid
GitHub account. The
JAR file "lib.engine/insight/optimyth-insight.jar" also contains the
file "es/als/security/Encryptor.properties", which holds the key used for
encrypting the results of any performed scan.
This issue affects Kiuwan SAST: <master.1808.p685.q13371
Kiuwan provides an API endpoint
/saas/rest/v1/info/application
to get information about any
application, providing only its name via the "application" parameter. This endpoint lacks proper access
control mechanisms, allowing other authenticated users to read
information about applications, even though they have not been granted
the necessary rights to do so.
This issue affects Kiuwan SAST: <master.1808.p685.q13371
For Kiuwan installations with SSO (single sign-on) enabled, an
unauthenticated reflected cross-site scripting attack can be performed
on the login page "login.html". This is possible because the values of the request parameter "message"
are directly included in a JavaScript block in the response. This is
especially critical in business environments using AD SSO
authentication, e.g. via ADFS, where attackers could potentially steal
AD passwords.
This issue affects Kiuwan SAST: <master.1808.p685.q13371
When the Kiuwan Local Analyzer uploads the scan results to the Kiuwan SAST web
application (either on-premises or cloud/SaaS solution), the transmitted data
consists of a ZIP archive containing several files, some of them in the
XML file format. During Kiuwan's server-side processing of these XML
files, it resolves external XML entities, resulting in an XML external
entity injection attack. An attacker with privileges to scan
source code within the "Code Security" module is able to extract any
files of the operating system with the rights of the application server
user and is potentially able to obtain sensitive files, such as
configuration files and passwords. Furthermore, this vulnerability also allows
an attacker to initiate connections to internal systems, e.g. for port
scans or accessing other internal functions / applications such as the
Wildfly admin console of Kiuwan.
This issue affects Kiuwan SAST: <master.1808.p685.q13371
In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix use-after-free after failure to create a snapshot
At ioctl.c:create_snapshot(), we allocate a pending snapshot structure and
then attach it to the transaction's list of pending snapshots. After that
we call btrfs_commit_transaction(), and if that returns an error we jump
to 'fail' label, where we kfree() the pending snapshot structure. This can
result in a later use-after-free of the pending snapshot:
1) We allocated the pending snapshot and added it to the transaction's
list of pending snapshots;
2) We call btrfs_commit_transaction(), and it fails either at the first
call to btrfs_run_delayed_refs() or btrfs_start_dirty_block_groups().
In both cases, we don't abort the transaction and we release our
transaction handle. We jump to the 'fail' label and free the pending
snapshot structure. We return with the pending snapshot still in the
transaction's list;
3) Another task commits the transaction. This time there's no error at
all, and then during the transaction commit it accesses a pointer
to the pending snapshot structure that the snapshot creation task
has already freed, resulting in a use-after-free.
This issue could actually be detected by smatch, which produced the
following warning:
fs/btrfs/ioctl.c:843 create_snapshot() warn: '&pending_snapshot->list' not removed from list
So fix this by not having the snapshot creation ioctl directly add the
pending snapshot to the transaction's list. Instead add the pending
snapshot to the transaction handle, and then at btrfs_commit_transaction()
we add the snapshot to the list only when we can guarantee that any error
returned after that point will result in a transaction abort, in which
case the ioctl code can safely free the pending snapshot and no one can
access it anymore.
In the Linux kernel, the following vulnerability has been resolved:
KVM: arm64: Avoid consuming a stale esr value when SError occur
When any exception other than an IRQ occurs, the CPU updates the ESR_EL2
register with the exception syndrome. An SError may also become pending,
and will be synchronised by KVM. KVM notes the exception type, and whether
an SError was synchronised in exit_code.
When an exception other than an IRQ occurs, fixup_guest_exit() updates
vcpu->arch.fault.esr_el2 from the hardware register. When an SError was
synchronised, the vcpu esr value is used to determine if the exception
was due to an HVC. If so, ELR_EL2 is moved back one instruction. This
is so that KVM can process the SError first, and re-execute the HVC if
the guest survives the SError.
But if an IRQ synchronises an SError, the vcpu's esr value is stale.
If the previous non-IRQ exception was an HVC, KVM will corrupt ELR_EL2,
causing an unrelated guest instruction to be executed twice.
Check ARM_EXCEPTION_CODE() before messing with ELR_EL2; IRQs don't
update this register, so they don't need the check.
In the Linux kernel, the following vulnerability has been resolved:
net/smc: Forward wakeup to smc socket waitqueue after fallback
When we replace TCP with SMC and a fallback occurs, there may be
some socket waitqueue entries remaining in smc socket->wq, such
as eppoll_entries inserted by userspace applications.
After the fallback, data flows over TCP/IP and only clcsock->wq
will be woken up. Applications can't be notified through the entries
which were inserted into smc socket->wq before the fallback. So we need
a mechanism to wake up smc socket->wq at the same time if some
entries remain in it.
The current workaround is to transfer the entries from smc socket->wq
to clcsock->wq during the fallback. But this may cause a crash
like this:
general protection fault, probably for non-canonical address 0xdead000000000100: 0000 [#1] PREEMPT SMP PTI
CPU: 3 PID: 0 Comm: swapper/3 Kdump: loaded Tainted: G E 5.16.0+ #107
RIP: 0010:__wake_up_common+0x65/0x170
Call Trace:
<IRQ>
__wake_up_common_lock+0x7a/0xc0
sock_def_readable+0x3c/0x70
tcp_data_queue+0x4a7/0xc40
tcp_rcv_established+0x32f/0x660
? sk_filter_trim_cap+0xcb/0x2e0
tcp_v4_do_rcv+0x10b/0x260
tcp_v4_rcv+0xd2a/0xde0
ip_protocol_deliver_rcu+0x3b/0x1d0
ip_local_deliver_finish+0x54/0x60
ip_local_deliver+0x6a/0x110
? tcp_v4_early_demux+0xa2/0x140
? tcp_v4_early_demux+0x10d/0x140
ip_sublist_rcv_finish+0x49/0x60
ip_sublist_rcv+0x19d/0x230
ip_list_rcv+0x13e/0x170
__netif_receive_skb_list_core+0x1c2/0x240
netif_receive_skb_list_internal+0x1e6/0x320
napi_complete_done+0x11d/0x190
mlx5e_napi_poll+0x163/0x6b0 [mlx5_core]
__napi_poll+0x3c/0x1b0
net_rx_action+0x27c/0x300
__do_softirq+0x114/0x2d2
irq_exit_rcu+0xb4/0xe0
common_interrupt+0xba/0xe0
</IRQ>
<TASK>
The crash is caused by privately transferring waitqueue entries from
smc socket->wq to clcsock->wq. The owners of these entries, such as
epoll, have no idea that the entries have been transferred to a
different socket wait queue and still use the original waitqueue spinlock
(smc socket->wq.wait.lock) to make operations on the entries exclusive,
but that no longer works. Operations on the entries, such as removing
them from the waitqueue (which is now clcsock->wq after the fallback),
may cause a crash when the clcsock waitqueue is being iterated over at
that moment.
This patch fixes this by no longer transferring wait queue entries
privately, but instead introducing its own implementations of clcsock's
callback functions for the fallback situation. The callback functions
forward the wakeup to smc socket->wq if clcsock->wq is actually woken
up and smc socket->wq still has entries remaining.
In the Linux kernel, the following vulnerability has been resolved:
net: macsec: Fix offload support for NETDEV_UNREGISTER event
The current macsec netdev notify handler handles the NETDEV_UNREGISTER
event by releasing the relevant SW resources only; this causes a resource
leak in the case of macsec HW offload, as the underlay driver is not
notified to clean up its macsec offload resources.
Fix this by calling into the underlay driver to clean up its relevant
resources, moving the offload handling from macsec_dellink() to
macsec_common_dellink() when handling the NETDEV_UNREGISTER event.
In the Linux kernel, the following vulnerability has been resolved:
scsi: bnx2fc: Make bnx2fc_recv_frame() mp safe
Running tests with a debug kernel shows that bnx2fc_recv_frame() is
modifying the per_cpu lport stats counters in a non-mpsafe way. Just boot
a debug kernel and run the bnx2fc driver with the hardware enabled.
[ 1391.699147] BUG: using smp_processor_id() in preemptible [00000000] code: bnx2fc_
[ 1391.699160] caller is bnx2fc_recv_frame+0xbf9/0x1760 [bnx2fc]
[ 1391.699174] CPU: 2 PID: 4355 Comm: bnx2fc_l2_threa Kdump: loaded Tainted: G B
[ 1391.699180] Hardware name: HP ProLiant DL120 G7, BIOS J01 07/01/2013
[ 1391.699183] Call Trace:
[ 1391.699188] dump_stack_lvl+0x57/0x7d
[ 1391.699198] check_preemption_disabled+0xc8/0xd0
[ 1391.699205] bnx2fc_recv_frame+0xbf9/0x1760 [bnx2fc]
[ 1391.699215] ? do_raw_spin_trylock+0xb5/0x180
[ 1391.699221] ? bnx2fc_npiv_create_vports.isra.0+0x4e0/0x4e0 [bnx2fc]
[ 1391.699229] ? bnx2fc_l2_rcv_thread+0xb7/0x3a0 [bnx2fc]
[ 1391.699240] bnx2fc_l2_rcv_thread+0x1af/0x3a0 [bnx2fc]
[ 1391.699250] ? bnx2fc_ulp_init+0xc0/0xc0 [bnx2fc]
[ 1391.699258] kthread+0x364/0x420
[ 1391.699263] ? _raw_spin_unlock_irq+0x24/0x50
[ 1391.699268] ? set_kthread_struct+0x100/0x100
[ 1391.699273] ret_from_fork+0x22/0x30
Restore the old get_cpu/put_cpu code with some modifications to reduce the
size of the critical section.
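A sketch of the pattern the fix restores (a generic per-CPU stats update; the exact bnx2fc/libfc stats layout is assumed, so a hypothetical struct is used here):

#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/types.h>

struct rx_stats {
        u64 frames;
        u64 words;
};

/*
 * get_cpu() disables preemption, so we stay on one CPU for the short
 * critical section and can safely touch that CPU's counters; put_cpu()
 * re-enables preemption afterwards.
 */
static void count_rx_frame(struct rx_stats __percpu *stats_pcpu, u32 bytes)
{
        struct rx_stats *stats = per_cpu_ptr(stats_pcpu, get_cpu());

        stats->frames++;
        stats->words += bytes / 4;      /* FC words are 4 bytes */
        put_cpu();
}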
In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: HCI: Remove HCI_AMP support
Since BT_HS has been removed, HCI_AMP controllers no longer have any use,
so remove them along with the capability of creating AMP controllers.
Since we no longer need to differentiate between AMP and primary
controllers, as only HCI_PRIMARY is left, this also removes
hdev->dev_type altogether.
In the Linux kernel, the following vulnerability has been resolved:
usb-storage: alauda: Check whether the media is initialized
The member "uzonesize" of struct alauda_info will remain 0
if alauda_init_media() fails, potentially causing divide errors
in alauda_read_data() and alauda_write_lba().
- Add a member "media_initialized" to struct alauda_info.
- Change a condition in alauda_check_media() to ensure the
first initialization.
- Add an error check for the return value of alauda_init_media().
In mintplex-labs/anything-llm versions up to and including 1.5.3, an issue was discovered where the password hash of a user is returned in the response after login (`POST /api/request-token`) and after account creation (`POST /api/admin/users/new`). This exposure occurs because the entire User object, including the bcrypt password hash, is included in the response sent to the frontend. This practice could lead to sensitive information exposure despite the use of bcrypt, a strong hashing algorithm. It is recommended not to expose any clues about passwords to the frontend.
In the Linux kernel, the following vulnerability has been resolved:
ALSA: timer: Set lower bound of start tick time
Currently the ALSA timer doesn't have a lower limit on the start tick
time, and it allows a very small value, e.g. 1 tick with 1ns resolution
for hrtimer. Such a situation may lead to an unexpected RCU stall,
where the callback repeatedly queues the expire update, as reported
by a fuzzer.
This patch introduces a sanity check of the timer start tick time, so
that the system returns an error when a too small start size is set.
As of this patch, the lower limit is hard-coded to 100us, which is
small enough but can still work somehow.
In the Linux kernel, the following vulnerability has been resolved:
kunit/fortify: Fix mismatched kvalloc()/vfree() usage
The kv*() family of tests were accidentally freeing with vfree() instead
of kvfree(). Use kvfree() instead.
In the Linux kernel, the following vulnerability has been resolved:
cpufreq: exit() callback is optional
The exit() callback is optional and shouldn't be called without first
checking that the pointer is valid.
Also, we must clear the freq_table pointer even if the exit() callback
isn't present.
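A sketch of the guarded call described above (simplified; struct cpufreq_driver and the policy teardown around it are assumed):

#include <linux/cpufreq.h>

static void policy_exit_sketch(struct cpufreq_driver *drv,
                               struct cpufreq_policy *policy)
{
        /* ->exit() is optional: only call it if the driver provides one. */
        if (drv->exit)
                drv->exit(policy);

        /* Clear freq_table unconditionally, even without an ->exit(). */
        policy->freq_table = NULL;
}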
In the Linux kernel, the following vulnerability has been resolved:
openrisc: traps: Don't send signals to kernel mode threads
OpenRISC exception handling sends signals to user processes on floating
point exceptions and trap instructions (for debugging) among others.
There is a bug where the trap handling logic may send signals to kernel
threads; we should not send these signals to kernel threads, and if that
happens we should treat it as an error.
This patch adds conditions to die if the kernel receives these
exceptions in kernel-mode code.
In the Linux kernel, the following vulnerability has been resolved:
ipv6: sr: fix invalid unregister error path
The error path of seg6_init() is wrong in case CONFIG_IPV6_SEG6_LWTUNNEL
is not defined. In that case, if seg6_hmac_init() fails,
genl_unregister_family() isn't called.
This issue has existed since commit 46738b1317e1 ("ipv6: sr: add option to
control lwtunnel support"); commit 5559cea2d5aa ("ipv6: sr: fix possible
use-after-free and null-ptr-deref") replaced unregister_pernet_subsys()
with genl_unregister_family() in this error path.
In the Linux kernel, the following vulnerability has been resolved:
media: i2c: et8ek8: Don't strip remove function when driver is builtin
Using __exit for the remove function results in the remove callback
being discarded with CONFIG_VIDEO_ET8EK8=y. When such a device gets
unbound (e.g. using sysfs or hotplug), the driver is just removed
without the cleanup being performed. This results in resource leaks. Fix
it by compiling in the remove callback unconditionally.
This also fixes a W=1 modpost warning:
WARNING: modpost: drivers/media/i2c/et8ek8/et8ek8: section mismatch in reference: et8ek8_i2c_driver+0x10 (section: .data) -> et8ek8_remove (section: .exit.text)
In the Linux kernel, the following vulnerability has been resolved:
macintosh/via-macii: Fix "BUG: sleeping function called from invalid context"
The via-macii ADB driver calls request_irq() after disabling hard
interrupts. But disabling interrupts isn't necessary here because the
VIA shift register interrupt was masked during VIA1 initialization.
In the Linux kernel, the following vulnerability has been resolved:
block: refine the EOF check in blkdev_iomap_begin
blkdev_iomap_begin rounds down the offset to the logical block size
before stashing it in iomap->offset and checking that it still is
inside the inode size.
Apply the i_size check to the raw pos value instead, so that we don't
attempt a zero-size write if iter->pos is unaligned.
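A sketch of the refined check (simplified; the block-device iomap context is assumed): compare the raw position against i_size before rounding down, so an unaligned pos at EOF doesn't produce a zero-length mapping.

#include <linux/errno.h>
#include <linux/math.h>
#include <linux/types.h>

/* pos: raw byte offset from the iter; isize: bdev inode size. */
static int iomap_begin_sketch(loff_t pos, loff_t isize, unsigned int blkbits,
                              loff_t *offset, loff_t *length)
{
        if (pos >= isize)               /* EOF check on the raw, unrounded pos */
                return -EIO;

        *offset = round_down(pos, 1 << blkbits);  /* aligned mapping start */
        *length = isize - *offset;
        return 0;
}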