Merge ~mschiu77/ubuntu/+source/linux-intel:austin-1928665 into ~mschiu77/ubuntu/+source/linux-intel/+git/focal:master-next

Proposed by Chris Chiu
Status: Needs review
Proposed branch: ~mschiu77/ubuntu/+source/linux-intel:austin-1928665
Merge into: ~mschiu77/ubuntu/+source/linux-intel/+git/focal:master-next
Diff against target: 3871 lines (+2530/-379)
20 files modified
debian.intel/config/config.common.ubuntu (+3/-0)
drivers/bus/mhi/core/boot.c (+7/-6)
drivers/bus/mhi/core/debugfs.c (+1/-1)
drivers/bus/mhi/core/init.c (+31/-23)
drivers/bus/mhi/core/internal.h (+18/-1)
drivers/bus/mhi/core/main.c (+232/-204)
drivers/bus/mhi/core/pm.c (+59/-48)
drivers/bus/mhi/pci_generic.c (+644/-33)
drivers/net/Kconfig (+2/-0)
drivers/net/Makefile (+2/-1)
drivers/net/mhi/Makefile (+3/-0)
drivers/net/mhi/mhi.h (+41/-0)
drivers/net/mhi/net.c (+148/-55)
drivers/net/mhi/proto_mbim.c (+303/-0)
drivers/net/wwan/Kconfig (+37/-0)
drivers/net/wwan/Makefile (+9/-0)
drivers/net/wwan/mhi_wwan_ctrl.c (+284/-0)
drivers/net/wwan/wwan_core.c (+554/-0)
include/linux/mhi.h (+41/-7)
include/linux/wwan.h (+111/-0)
Reviewer        Review Type    Date Requested    Status
Chris Chiu                                       Pending
Review via email: mp+404246@code.launchpad.net

Description of the change

Enable WWAN framework and driver support on linux-intel (5.11)


Unmerged commits

38fdafa... by Chris Chiu

UBUNTU: [Config] enable configs for WWAN framework and drivers

BugLink: https://bugs.launchpad.net/bugs/1932124

Signed-off-by: Chris Chiu <email address hidden>

5a836d0... by Loic Poulain <email address hidden>

net: wwan: core: Return poll error in case of port removal

BugLink: https://bugs.launchpad.net/bugs/1932124

Ensure that the poll system call returns the proper error flags when a
port is removed (its port ops nullified), allowing the user side to fail
cleanly without attempting further reads or writes.

Fixes: 9a44c1cc6388 ("net: Add a WWAN subsystem")
Signed-off-by: Loic Poulain <email address hidden>
Reviewed-by: Leon Romanovsky <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 57e222475545f457ecf4833db31f156e8b7674c7)
Signed-off-by: Chris Chiu <email address hidden>
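
As an illustration of the behaviour being fixed, here is a minimal
sketch of such a poll handler in C; the struct fields and helper names
are assumptions for demonstration, not the exact wwan_core code:

#include <linux/poll.h>
#include <linux/skbuff.h>

static __poll_t wwan_port_fops_poll(struct file *filp, poll_table *wait)
{
        struct wwan_port *port = filp->private_data;
        __poll_t mask = 0;

        poll_wait(filp, &port->waitqueue, wait);

        /* Port removed: its ops were nullified, so report hangup/error
         * and let userspace terminate instead of issuing further
         * read/write calls. */
        if (!port->ops)
                return EPOLLHUP | EPOLLERR;

        if (!skb_queue_empty(&port->rxq))
                mask |= EPOLLIN | EPOLLRDNORM;

        return mask;
}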

f8a9747... by Loic Poulain <email address hidden>

net: wwan: mhi_wwan_ctrl: Fix RX buffer starvation

BugLink: https://bugs.launchpad.net/bugs/1932124

The mhi_wwan_rx_budget_dec function is supposed to return true if the
RX buffer budget has been successfully decremented, allowing the caller
to queue a new RX buffer for transfer. However, the current
implementation is broken when the RX budget is '1': the budget is
decremented but false is returned, preventing one buffer from being
requeued and leading to RX buffer starvation.

Fixes: fa588eba632d ("net: Add Qcom WWAN control driver")
Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit a926c025d56bb1acd8a192fca0e307331ee91b30)
Signed-off-by: Chris Chiu <email address hidden>
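
A simplified sketch of the logic error and the fix, in C (the struct and
field names below are assumptions, not the verbatim mhi_wwan_ctrl.c
code):

/* Broken: when rx_budget is 1 it is consumed, yet false is returned,
 * so the caller never queues a replacement RX buffer. */
static bool rx_budget_dec_broken(struct mhi_wwan_dev *mhiwwan)
{
        bool ret = false;

        spin_lock_bh(&mhiwwan->rx_lock);
        if (mhiwwan->rx_budget) {
                mhiwwan->rx_budget--;
                if (mhiwwan->rx_budget) /* off-by-one: 1 -> 0 reports failure */
                        ret = true;
        }
        spin_unlock_bh(&mhiwwan->rx_lock);

        return ret;
}

/* Fixed: report success whenever one unit of budget was actually taken. */
static bool rx_budget_dec(struct mhi_wwan_dev *mhiwwan)
{
        bool ret = false;

        spin_lock_bh(&mhiwwan->rx_lock);
        if (mhiwwan->rx_budget > 0) {
                mhiwwan->rx_budget--;
                ret = true;
        }
        spin_unlock_bh(&mhiwwan->rx_lock);

        return ret;
}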

8311c4f... by Loic Poulain <email address hidden>

net: wwan: Fix bit ops double shift

BugLink: https://bugs.launchpad.net/bugs/1932124

Bit operation helpers such as test_bit, clear_bit, etc. take a bit
position as parameter, not a mask value. The current usage causes a
double shift => BIT(BIT(0)). Fix that in wwan_core and mhi_wwan_ctrl.

Fixes: 9a44c1cc6388 ("net: Add a WWAN subsystem")
Reported-by: Dan Carpenter <email address hidden>
Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit b8c55ce266dee09b0e359ff9af885eb94e11480a)
Signed-off-by: Chris Chiu <email address hidden>
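
A minimal C illustration of the mistake (the flag name is for
illustration only):

#include <linux/bitops.h>

#define MHI_WWAN_RX_REFILL      0       /* a bit *position*, not a mask */

static void flag_example(unsigned long *flags)
{
        /* Wrong: BIT(MHI_WWAN_RX_REFILL) is already a mask (0x1), and
         * test_bit() expects a position, so this tests bit 1 instead of
         * bit 0 -- effectively BIT(BIT(0)). */
        if (test_bit(BIT(MHI_WWAN_RX_REFILL), flags))
                return;

        /* Right: pass the bit position directly. */
        if (test_bit(MHI_WWAN_RX_REFILL, flags))
                return;
}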

a4616f1... by Loic Poulain <email address hidden>

net: Add Qcom WWAN control driver

BugLink: https://bugs.launchpad.net/bugs/1932124

The MHI WWAN control driver allows MHI QCOM-based modems to expose
different modem control protocols/ports via the WWAN framework, so that
userspace modem tools or daemons (e.g. ModemManager) can control WWAN
config and state (APN config, SMS, provider selection...). A QCOM-based
modem can expose one or several of the following protocols:
- AT: Well-known interactive AT command protocol (microcom, minicom...)
- MBIM: Mobile Broadband Interface Model (libmbim, mbimcli)
- QMI: QCOM MSM/Modem Interface (libqmi, qmicli)
- QCDM: QCOM Modem diagnostic interface (libqcdm)
- FIREHOSE: XML-based protocol for Modem firmware management
        (qmi-firmware-update)

Note that this patch is mostly a rework of the earlier MHI UCI
attempt, which was a generic interface for accessing the MHI bus from
userspace. As suggested, this new version is WWAN-specific and is
dedicated to exposing only the channels used for controlling a modem,
and for which related open-source userspace support exists.

Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit fa588eba632df14d296436995e6bbea0c146ae77)
Signed-off-by: Chris Chiu <email address hidden>

7b952ef... by Loic Poulain <email address hidden>

net: Add a WWAN subsystem

BugLink: https://bugs.launchpad.net/bugs/1932124

This change introduces initial support for a WWAN framework. Given the
complexity and heterogeneity of existing WWAN hardware and interfaces,
there is no strict definition of what a WWAN device is and how it should
be represented. It is often a collection of multiple devices that
together provide the overall WWAN feature (netdev, tty, chardev, etc).

One usual way to expose modem control and configuration is via
high-level protocols such as the well-known AT command protocol, MBIM or
QMI. USB modems started to expose these as character devices, and user
daemons such as ModemManager learnt to use them.

This initial version adds the concept of a WWAN port, which is a
logical pipe to a modem control protocol. The protocols are exposed raw
to userspace via character devices, allowing straightforward support in
existing tools (ModemManager, ofono...). The WWAN core takes care of the
generic part, including character device management, and relies on port
driver operations to receive/submit protocol data.

Since the different devices exposing protocols for the same WWAN
hardware do not necessarily know about each other (e.g. two different
USB interfaces, PCI/MHI channel devices...) and can be created/removed
in different orders, the WWAN core ensures that all WWAN ports
contributing to the 'whole' WWAN feature are grouped under the same
virtual WWAN device, relying on the provided parent device (e.g. mhi
controller, USB device). It's a 'trick' I copied from Johannes's earlier
WWAN subsystem proposal.

This initial version is purposely minimalist; it essentially moves the
generic part of the previously proposed mhi_wwan_ctrl driver into a
common WWAN framework, but the implementation is open and flexible
enough to allow extension for further drivers.

Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 9a44c1cc63887627284ae232a9626a9f1cd066fc)
Signed-off-by: Chris Chiu <email address hidden>
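
To make the port concept concrete, here is a hedged sketch of how a
port driver could plug into this core; the wwan_create_port() signature
and the wwan_port_ops fields are assumptions based on this initial
version of include/linux/wwan.h, and the my_* names are hypothetical:

#include <linux/err.h>
#include <linux/skbuff.h>
#include <linux/wwan.h>

static int my_port_start(struct wwan_port *port)
{
        return 0;       /* e.g. start the underlying transport channel */
}

static void my_port_stop(struct wwan_port *port)
{
}

static int my_port_tx(struct wwan_port *port, struct sk_buff *skb)
{
        /* hand the control message to the underlying transport */
        kfree_skb(skb);
        return 0;
}

static const struct wwan_port_ops my_port_ops = {
        .start = my_port_start,
        .stop  = my_port_stop,
        .tx    = my_port_tx,
};

static int my_probe(struct device *parent)
{
        struct wwan_port *port;

        /* Ports created with the same parent device are grouped under
         * one virtual wwanX device by the WWAN core. */
        port = wwan_create_port(parent, WWAN_PORT_AT, &my_port_ops, NULL);
        if (IS_ERR(port))
                return PTR_ERR(port);

        /* RX data is pushed to userspace with wwan_port_rx(port, skb) */
        return 0;
}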

c9812e2... by Loic Poulain <email address hidden>

net: mhi: Allow decoupled MTU/MRU

BugLink: https://bugs.launchpad.net/bugs/1932124

The MBIM protocol makes the mhi network interface asymmetric: ingress
data received from MHI is MBIM protocol, possibly containing multiple
aggregated IP packets, while egress data received from the network stack
is plain IP.

This change allows a 'protocol' to specify its own MRU which, when
specified, is used to allocate the MHI RX buffers (skb).

For MBIM, set the default MTU to 1500, which is the usual network MTU
for WWAN IP packets, and the MRU to 3.5K (for allocation efficiency),
allowing the skb to fit in a usual 4K page (including padding,
skb_shared_info, ...).

Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 3af562a37b7f1a5dbeb50a00ace280ba2d984762)
Signed-off-by: Chris Chiu <email address hidden>
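
For reference, a rough sketch of the sizing argument behind the 3.5K
figure; the macro name and constants below are assumptions and the
exact values depend on the kernel configuration:

/* Per RX buffer, the allocation is roughly:
 *   SKB_DATA_ALIGN(NET_SKB_PAD + MRU) +
 *   SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
 * With an MRU of 3500, NET_SKB_PAD of 64 and skb_shared_info around
 * 320 bytes (typical x86_64 build), the total stays below 4096, so the
 * data buffer fits in a single page; a 4K MRU would spill into an
 * order-1 allocation instead.
 */
#define MHI_MBIM_DEFAULT_MRU    3500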

a1fec54... by Loic Poulain <email address hidden>

net: mhi: Add support for non-linear MBIM skb processing

BugLink: https://bugs.launchpad.net/bugs/1932124

Currently, if an skb is non-linear due to MHI skb chaining, it is
linearized in the MBIM RX handler prior to MBIM decoding, causing an
extra allocation and copy that can be as large as the maximum MBIM
frame size (32K).

This change introduces MBIM decoding for non-linear skbs, allowing
'large' non-linear MBIM packets to be processed without skb
linearization. The IP packets are simply extracted from the MBIM frame
using the skb_copy_bits helper.

Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit d9f0713c9217fdd31077f890c2e15232ad2f0772)
Signed-off-by: Chris Chiu <email address hidden>
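
A hedged sketch of the pattern, simplified from the actual proto_mbim
parsing loop (the NDP walk that yields each datagram's offset and
length is omitted, and the function name is illustrative):

#include <linux/if_ether.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Extract one aggregated IP datagram out of a possibly non-linear MBIM
 * frame without linearizing the whole 32K skb: allocate a right-sized
 * skb and copy only that datagram's bytes from the fragments. */
static struct sk_buff *mbim_extract_one(struct net_device *ndev,
                                        struct sk_buff *mbim_skb,
                                        unsigned int offset,
                                        unsigned int len)
{
        struct sk_buff *skb;

        skb = netdev_alloc_skb(ndev, len);
        if (!skb)
                return NULL;

        /* skb_copy_bits() walks the fragment pages/chain for us */
        if (skb_copy_bits(mbim_skb, offset, skb_put(skb, len), len)) {
                dev_kfree_skb_any(skb);
                return NULL;
        }

        skb->protocol = htons(ETH_P_IP);        /* IPv4 assumed for brevity */

        return skb;
}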

93627b8... by Loic Poulain <email address hidden>

net: mhi: Add mbim proto

BugLink: https://bugs.launchpad.net/bugs/1932124

MBIM was initially specified by the USB-IF for transporting data (IP)
between a modem and a host over USB. However, some modern modems also
support MBIM over PCIe (via MHI). In the same way as QMAP (rmnet), it
allows IP packets to be aggregated and contexts to be multiplexed.

This change adds minimal MBIM data transport support to MHI, allowing
MBIM-only modems to be supported. MBIM being based on USB NCM, it
reuses and copies some helpers/functions from the USB stack (cdc-ncm,
cdc-mbim).

Note that this is a subset of the CDC-MBIM specification, supporting
only transport of network data (IP); there is no support for DSS.
Moreover, multi-session (for multi-PDN) is not supported in this
initial version, but will be added later and aligned with the cdc-mbim
solution (VLAN tags).

This code was inspired by the mhi_mbim downstream implementation
(Carl Yin <email address hidden>).

Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 163c5e6262ae5d7347801964dbd3d48490490a3d)
Signed-off-by: Chris Chiu <email address hidden>

f3b06a8... by Loic Poulain <email address hidden>

net: mhi: Add rx_length_errors stat

BugLink: https://bugs.launchpad.net/bugs/1932124

This can be used by a proto when the packet length is incorrect.

Signed-off-by: Loic Poulain <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 84c55f16dcd74af5be525aa9c1878bfaec4e8a7a)
Signed-off-by: Chris Chiu <email address hidden>

Preview Diff

1diff --git a/debian.intel/config/config.common.ubuntu b/debian.intel/config/config.common.ubuntu
2index a2db3b8..29613ac 100644
3--- a/debian.intel/config/config.common.ubuntu
4+++ b/debian.intel/config/config.common.ubuntu
5@@ -4426,6 +4426,7 @@ CONFIG_MFD_WM8994=m
6 CONFIG_MFD_WM8997=y
7 CONFIG_MFD_WM8998=y
8 CONFIG_MHI_BUS=m
9+CONFIG_MHI_WWAN_CTRL=m
10 # CONFIG_MHI_BUS_DEBUG is not set
11 CONFIG_MHI_BUS_PCI_GENERIC=m
12 CONFIG_MHI_NET=m
13@@ -9065,6 +9066,8 @@ CONFIG_WM8350_WATCHDOG=m
14 CONFIG_WMI_BMOF=m
15 CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
16 # CONFIG_WQ_WATCHDOG is not set
17+CONFIG_WWAN=y
18+CONFIG_WWAN_CORE=m
19 # CONFIG_WW_MUTEX_SELFTEST is not set
20 CONFIG_X25=m
21 CONFIG_X509_CERTIFICATE_PARSER=y
22diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c
23index 08c2874..8100cf5 100644
24--- a/drivers/bus/mhi/core/boot.c
25+++ b/drivers/bus/mhi/core/boot.c
26@@ -416,9 +416,9 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
27 }
28 }
29
30- /* If device is in pass through, do reset to ready state transition */
31- if (mhi_cntrl->ee == MHI_EE_PTHRU)
32- goto fw_load_ee_pthru;
33+ /* wait for ready on pass through or any other execution environment */
34+ if (mhi_cntrl->ee != MHI_EE_EDL && mhi_cntrl->ee != MHI_EE_PBL)
35+ goto fw_load_ready_state;
36
37 fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
38 mhi_cntrl->edl_image : mhi_cntrl->fw_image;
39@@ -460,9 +460,10 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
40 goto error_fw_load;
41 }
42
43- if (mhi_cntrl->ee == MHI_EE_EDL) {
44+ /* Wait for ready since EDL image was loaded */
45+ if (fw_name == mhi_cntrl->edl_image) {
46 release_firmware(firmware);
47- return;
48+ goto fw_load_ready_state;
49 }
50
51 write_lock_irq(&mhi_cntrl->pm_lock);
52@@ -487,7 +488,7 @@ void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
53
54 release_firmware(firmware);
55
56-fw_load_ee_pthru:
57+fw_load_ready_state:
58 /* Transitioning into MHI RESET->READY state */
59 ret = mhi_ready_state_transition(mhi_cntrl);
60 if (ret) {
61diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
62index 7d43138..858d751 100644
63--- a/drivers/bus/mhi/core/debugfs.c
64+++ b/drivers/bus/mhi/core/debugfs.c
65@@ -377,7 +377,7 @@ static struct dentry *mhi_debugfs_root;
66 void mhi_create_debugfs(struct mhi_controller *mhi_cntrl)
67 {
68 mhi_cntrl->debugfs_dentry =
69- debugfs_create_dir(dev_name(mhi_cntrl->cntrl_dev),
70+ debugfs_create_dir(dev_name(&mhi_cntrl->mhi_dev->dev),
71 mhi_debugfs_root);
72
73 debugfs_create_file("states", 0444, mhi_cntrl->debugfs_dentry,
74diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
75index 9ed047f..c81b377 100644
76--- a/drivers/bus/mhi/core/init.c
77+++ b/drivers/bus/mhi/core/init.c
78@@ -22,13 +22,14 @@
79 static DEFINE_IDA(mhi_controller_ida);
80
81 const char * const mhi_ee_str[MHI_EE_MAX] = {
82- [MHI_EE_PBL] = "PBL",
83- [MHI_EE_SBL] = "SBL",
84- [MHI_EE_AMSS] = "AMSS",
85- [MHI_EE_RDDM] = "RDDM",
86- [MHI_EE_WFW] = "WFW",
87- [MHI_EE_PTHRU] = "PASS THRU",
88- [MHI_EE_EDL] = "EDL",
89+ [MHI_EE_PBL] = "PRIMARY BOOTLOADER",
90+ [MHI_EE_SBL] = "SECONDARY BOOTLOADER",
91+ [MHI_EE_AMSS] = "MISSION MODE",
92+ [MHI_EE_RDDM] = "RAMDUMP DOWNLOAD MODE",
93+ [MHI_EE_WFW] = "WLAN FIRMWARE",
94+ [MHI_EE_PTHRU] = "PASS THROUGH",
95+ [MHI_EE_EDL] = "EMERGENCY DOWNLOAD",
96+ [MHI_EE_FP] = "FLASH PROGRAMMER",
97 [MHI_EE_DISABLE_TRANSITION] = "DISABLE",
98 [MHI_EE_NOT_SUPPORTED] = "NOT SUPPORTED",
99 };
100@@ -37,8 +38,9 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
101 [DEV_ST_TRANSITION_PBL] = "PBL",
102 [DEV_ST_TRANSITION_READY] = "READY",
103 [DEV_ST_TRANSITION_SBL] = "SBL",
104- [DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE",
105- [DEV_ST_TRANSITION_SYS_ERR] = "SYS_ERR",
106+ [DEV_ST_TRANSITION_MISSION_MODE] = "MISSION MODE",
107+ [DEV_ST_TRANSITION_FP] = "FLASH PROGRAMMER",
108+ [DEV_ST_TRANSITION_SYS_ERR] = "SYS ERROR",
109 [DEV_ST_TRANSITION_DISABLE] = "DISABLE",
110 };
111
112@@ -49,24 +51,30 @@ const char * const mhi_state_str[MHI_STATE_MAX] = {
113 [MHI_STATE_M1] = "M1",
114 [MHI_STATE_M2] = "M2",
115 [MHI_STATE_M3] = "M3",
116- [MHI_STATE_M3_FAST] = "M3_FAST",
117+ [MHI_STATE_M3_FAST] = "M3 FAST",
118 [MHI_STATE_BHI] = "BHI",
119- [MHI_STATE_SYS_ERR] = "SYS_ERR",
120+ [MHI_STATE_SYS_ERR] = "SYS ERROR",
121+};
122+
123+const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
124+ [MHI_CH_STATE_TYPE_RESET] = "RESET",
125+ [MHI_CH_STATE_TYPE_STOP] = "STOP",
126+ [MHI_CH_STATE_TYPE_START] = "START",
127 };
128
129 static const char * const mhi_pm_state_str[] = {
130 [MHI_PM_STATE_DISABLE] = "DISABLE",
131- [MHI_PM_STATE_POR] = "POR",
132+ [MHI_PM_STATE_POR] = "POWER ON RESET",
133 [MHI_PM_STATE_M0] = "M0",
134 [MHI_PM_STATE_M2] = "M2",
135 [MHI_PM_STATE_M3_ENTER] = "M?->M3",
136 [MHI_PM_STATE_M3] = "M3",
137 [MHI_PM_STATE_M3_EXIT] = "M3->M0",
138- [MHI_PM_STATE_FW_DL_ERR] = "FW DL Error",
139- [MHI_PM_STATE_SYS_ERR_DETECT] = "SYS_ERR Detect",
140- [MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS_ERR Process",
141+ [MHI_PM_STATE_FW_DL_ERR] = "Firmware Download Error",
142+ [MHI_PM_STATE_SYS_ERR_DETECT] = "SYS ERROR Detect",
143+ [MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS ERROR Process",
144 [MHI_PM_STATE_SHUTDOWN_PROCESS] = "SHUTDOWN Process",
145- [MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect",
146+ [MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "Linkdown or Error Fatal Detect",
147 };
148
149 const char *to_mhi_pm_state_str(enum mhi_pm_state state)
150@@ -151,12 +159,17 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
151 {
152 struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
153 struct device *dev = &mhi_cntrl->mhi_dev->dev;
154+ unsigned long irq_flags = IRQF_SHARED | IRQF_NO_SUSPEND;
155 int i, ret;
156
157+ /* if controller driver has set irq_flags, use it */
158+ if (mhi_cntrl->irq_flags)
159+ irq_flags = mhi_cntrl->irq_flags;
160+
161 /* Setup BHI_INTVEC IRQ */
162 ret = request_threaded_irq(mhi_cntrl->irq[0], mhi_intvec_handler,
163 mhi_intvec_threaded_handler,
164- IRQF_SHARED | IRQF_NO_SUSPEND,
165+ irq_flags,
166 "bhi", mhi_cntrl);
167 if (ret)
168 return ret;
169@@ -174,7 +187,7 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
170
171 ret = request_irq(mhi_cntrl->irq[mhi_event->irq],
172 mhi_irq_handler,
173- IRQF_SHARED | IRQF_NO_SUSPEND,
174+ irq_flags,
175 "mhi", mhi_event);
176 if (ret) {
177 dev_err(dev, "Error requesting irq:%d for ev:%d\n",
178@@ -503,8 +516,6 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
179
180 /* Setup wake db */
181 mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
182- mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
183- mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
184 mhi_cntrl->wake_set = false;
185
186 /* Setup channel db address for each channel in tre_ring */
187@@ -1088,8 +1099,6 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
188 mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
189 }
190
191- mhi_cntrl->pre_init = true;
192-
193 mutex_unlock(&mhi_cntrl->pm_mutex);
194
195 return 0;
196@@ -1120,7 +1129,6 @@ void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
197 }
198
199 mhi_deinit_dev_ctxt(mhi_cntrl);
200- mhi_cntrl->pre_init = false;
201 }
202 EXPORT_SYMBOL_GPL(mhi_unprepare_after_power_down);
203
204diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
205index 6f37439..5b9ea66 100644
206--- a/drivers/bus/mhi/core/internal.h
207+++ b/drivers/bus/mhi/core/internal.h
208@@ -369,6 +369,18 @@ enum mhi_ch_state {
209 MHI_CH_STATE_ERROR = 0x5,
210 };
211
212+enum mhi_ch_state_type {
213+ MHI_CH_STATE_TYPE_RESET,
214+ MHI_CH_STATE_TYPE_STOP,
215+ MHI_CH_STATE_TYPE_START,
216+ MHI_CH_STATE_TYPE_MAX,
217+};
218+
219+extern const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX];
220+#define TO_CH_STATE_TYPE_STR(state) (((state) >= MHI_CH_STATE_TYPE_MAX) ? \
221+ "INVALID_STATE" : \
222+ mhi_ch_state_type_str[(state)])
223+
224 #define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
225 mode != MHI_DB_BRST_ENABLE)
226
227@@ -379,13 +391,15 @@ extern const char * const mhi_ee_str[MHI_EE_MAX];
228 #define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
229 ee == MHI_EE_EDL)
230
231-#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
232+#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW || \
233+ ee == MHI_EE_FP)
234
235 enum dev_st_transition {
236 DEV_ST_TRANSITION_PBL,
237 DEV_ST_TRANSITION_READY,
238 DEV_ST_TRANSITION_SBL,
239 DEV_ST_TRANSITION_MISSION_MODE,
240+ DEV_ST_TRANSITION_FP,
241 DEV_ST_TRANSITION_SYS_ERR,
242 DEV_ST_TRANSITION_DISABLE,
243 DEV_ST_TRANSITION_MAX,
244@@ -644,6 +658,9 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
245 int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
246 void __iomem *base, u32 offset, u32 mask,
247 u32 shift, u32 *out);
248+int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
249+ void __iomem *base, u32 offset, u32 mask,
250+ u32 shift, u32 val, u32 delayus);
251 void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
252 u32 offset, u32 val);
253 void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
254diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
255index da495f6..b0c8afe 100644
256--- a/drivers/bus/mhi/core/main.c
257+++ b/drivers/bus/mhi/core/main.c
258@@ -4,6 +4,7 @@
259 *
260 */
261
262+#include <linux/delay.h>
263 #include <linux/device.h>
264 #include <linux/dma-direction.h>
265 #include <linux/dma-mapping.h>
266@@ -37,6 +38,28 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
267 return 0;
268 }
269
270+int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
271+ void __iomem *base, u32 offset,
272+ u32 mask, u32 shift, u32 val, u32 delayus)
273+{
274+ int ret;
275+ u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
276+
277+ while (retry--) {
278+ ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
279+ &out);
280+ if (ret)
281+ return ret;
282+
283+ if (out == val)
284+ return 0;
285+
286+ fsleep(delayus);
287+ }
288+
289+ return -ETIMEDOUT;
290+}
291+
292 void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
293 u32 offset, u32 val)
294 {
295@@ -111,7 +134,14 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
296 dma_addr_t db;
297
298 db = ring->iommu_base + (ring->wp - ring->base);
299+
300+ /*
301+ * Writes to the new ring element must be visible to the hardware
302+ * before letting h/w know there is new element to fetch.
303+ */
304+ dma_wmb();
305 *ring->ctxt_wp = db;
306+
307 mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
308 ring->db_addr, db);
309 }
310@@ -135,6 +165,19 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
311 }
312 EXPORT_SYMBOL_GPL(mhi_get_mhi_state);
313
314+void mhi_soc_reset(struct mhi_controller *mhi_cntrl)
315+{
316+ if (mhi_cntrl->reset) {
317+ mhi_cntrl->reset(mhi_cntrl);
318+ return;
319+ }
320+
321+ /* Generic MHI SoC reset */
322+ mhi_write_reg(mhi_cntrl, mhi_cntrl->regs, MHI_SOC_RESET_REQ_OFFSET,
323+ MHI_SOC_RESET_REQ);
324+}
325+EXPORT_SYMBOL_GPL(mhi_soc_reset);
326+
327 int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
328 struct mhi_buf_info *buf_info)
329 {
330@@ -286,6 +329,18 @@ int mhi_destroy_device(struct device *dev, void *data)
331 return 0;
332 }
333
334+int mhi_get_free_desc_count(struct mhi_device *mhi_dev,
335+ enum dma_data_direction dir)
336+{
337+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
338+ struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ?
339+ mhi_dev->ul_chan : mhi_dev->dl_chan;
340+ struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
341+
342+ return get_nr_avail_ring_elements(mhi_cntrl, tre_ring);
343+}
344+EXPORT_SYMBOL_GPL(mhi_get_free_desc_count);
345+
346 void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
347 {
348 struct mhi_driver *mhi_drv;
349@@ -410,9 +465,9 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
350 {
351 struct mhi_controller *mhi_cntrl = priv;
352 struct device *dev = &mhi_cntrl->mhi_dev->dev;
353- enum mhi_state state = MHI_STATE_MAX;
354+ enum mhi_state state;
355 enum mhi_pm_state pm_state = 0;
356- enum mhi_ee_type ee = MHI_EE_MAX;
357+ enum mhi_ee_type ee;
358
359 write_lock_irq(&mhi_cntrl->pm_lock);
360 if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
361@@ -422,9 +477,10 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
362
363 state = mhi_get_mhi_state(mhi_cntrl);
364 ee = mhi_get_exec_env(mhi_cntrl);
365- dev_dbg(dev, "local ee:%s device ee:%s dev_state:%s\n",
366- TO_MHI_EXEC_STR(mhi_cntrl->ee), TO_MHI_EXEC_STR(ee),
367- TO_MHI_STATE_STR(state));
368+ dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
369+ TO_MHI_EXEC_STR(mhi_cntrl->ee),
370+ TO_MHI_STATE_STR(mhi_cntrl->dev_state),
371+ TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
372
373 if (state == MHI_STATE_SYS_ERR) {
374 dev_dbg(dev, "System error detected\n");
375@@ -580,8 +636,11 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
376 /* notify client */
377 mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
378
379- if (mhi_chan->dir == DMA_TO_DEVICE)
380+ if (mhi_chan->dir == DMA_TO_DEVICE) {
381 atomic_dec(&mhi_cntrl->pending_pkts);
382+ /* Release the reference got from mhi_queue() */
383+ mhi_cntrl->runtime_put(mhi_cntrl);
384+ }
385
386 /*
387 * Recycle the buffer if buffer is pre-allocated,
388@@ -830,6 +889,9 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
389 case MHI_EE_AMSS:
390 st = DEV_ST_TRANSITION_MISSION_MODE;
391 break;
392+ case MHI_EE_FP:
393+ st = DEV_ST_TRANSITION_FP;
394+ break;
395 case MHI_EE_RDDM:
396 mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
397 write_lock_irq(&mhi_cntrl->pm_lock);
398@@ -1028,118 +1090,89 @@ static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
399 return (tmp == ring->rp);
400 }
401
402-int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
403- struct sk_buff *skb, size_t len, enum mhi_flags mflags)
404+static int mhi_queue(struct mhi_device *mhi_dev, struct mhi_buf_info *buf_info,
405+ enum dma_data_direction dir, enum mhi_flags mflags)
406 {
407 struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
408 struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
409 mhi_dev->dl_chan;
410 struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
411- struct mhi_buf_info buf_info = { };
412+ unsigned long flags;
413 int ret;
414
415- /* If MHI host pre-allocates buffers then client drivers cannot queue */
416- if (mhi_chan->pre_alloc)
417- return -EINVAL;
418+ if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
419+ return -EIO;
420
421- if (mhi_is_ring_full(mhi_cntrl, tre_ring))
422- return -ENOMEM;
423+ read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
424
425- read_lock_bh(&mhi_cntrl->pm_lock);
426- if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
427- read_unlock_bh(&mhi_cntrl->pm_lock);
428- return -EIO;
429+ ret = mhi_is_ring_full(mhi_cntrl, tre_ring);
430+ if (unlikely(ret)) {
431+ ret = -EAGAIN;
432+ goto exit_unlock;
433 }
434
435- /* we're in M3 or transitioning to M3 */
436- if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
437- mhi_trigger_resume(mhi_cntrl);
438-
439- /* Toggle wake to exit out of M2 */
440- mhi_cntrl->wake_toggle(mhi_cntrl);
441+ ret = mhi_gen_tre(mhi_cntrl, mhi_chan, buf_info, mflags);
442+ if (unlikely(ret))
443+ goto exit_unlock;
444
445- buf_info.v_addr = skb->data;
446- buf_info.cb_buf = skb;
447- buf_info.len = len;
448+ /* Packet is queued, take a usage ref to exit M3 if necessary
449+ * for host->device buffer, balanced put is done on buffer completion
450+ * for device->host buffer, balanced put is after ringing the DB
451+ */
452+ mhi_cntrl->runtime_get(mhi_cntrl);
453
454- ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
455- if (unlikely(ret)) {
456- read_unlock_bh(&mhi_cntrl->pm_lock);
457- return ret;
458- }
459+ /* Assert dev_wake (to exit/prevent M1/M2)*/
460+ mhi_cntrl->wake_toggle(mhi_cntrl);
461
462 if (mhi_chan->dir == DMA_TO_DEVICE)
463 atomic_inc(&mhi_cntrl->pending_pkts);
464
465- if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
466- read_lock_bh(&mhi_chan->lock);
467+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
468 mhi_ring_chan_db(mhi_cntrl, mhi_chan);
469- read_unlock_bh(&mhi_chan->lock);
470- }
471
472- read_unlock_bh(&mhi_cntrl->pm_lock);
473+ if (dir == DMA_FROM_DEVICE)
474+ mhi_cntrl->runtime_put(mhi_cntrl);
475
476- return 0;
477+exit_unlock:
478+ read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
479+
480+ return ret;
481 }
482-EXPORT_SYMBOL_GPL(mhi_queue_skb);
483
484-int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
485- struct mhi_buf *mhi_buf, size_t len, enum mhi_flags mflags)
486+int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
487+ struct sk_buff *skb, size_t len, enum mhi_flags mflags)
488 {
489- struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
490 struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
491 mhi_dev->dl_chan;
492- struct device *dev = &mhi_cntrl->mhi_dev->dev;
493- struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
494 struct mhi_buf_info buf_info = { };
495- int ret;
496-
497- /* If MHI host pre-allocates buffers then client drivers cannot queue */
498- if (mhi_chan->pre_alloc)
499- return -EINVAL;
500
501- if (mhi_is_ring_full(mhi_cntrl, tre_ring))
502- return -ENOMEM;
503-
504- read_lock_bh(&mhi_cntrl->pm_lock);
505- if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
506- dev_err(dev, "MHI is not in activate state, PM state: %s\n",
507- to_mhi_pm_state_str(mhi_cntrl->pm_state));
508- read_unlock_bh(&mhi_cntrl->pm_lock);
509+ buf_info.v_addr = skb->data;
510+ buf_info.cb_buf = skb;
511+ buf_info.len = len;
512
513- return -EIO;
514- }
515+ if (unlikely(mhi_chan->pre_alloc))
516+ return -EINVAL;
517
518- /* we're in M3 or transitioning to M3 */
519- if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
520- mhi_trigger_resume(mhi_cntrl);
521+ return mhi_queue(mhi_dev, &buf_info, dir, mflags);
522+}
523+EXPORT_SYMBOL_GPL(mhi_queue_skb);
524
525- /* Toggle wake to exit out of M2 */
526- mhi_cntrl->wake_toggle(mhi_cntrl);
527+int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
528+ struct mhi_buf *mhi_buf, size_t len, enum mhi_flags mflags)
529+{
530+ struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
531+ mhi_dev->dl_chan;
532+ struct mhi_buf_info buf_info = { };
533
534 buf_info.p_addr = mhi_buf->dma_addr;
535 buf_info.cb_buf = mhi_buf;
536 buf_info.pre_mapped = true;
537 buf_info.len = len;
538
539- ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
540- if (unlikely(ret)) {
541- read_unlock_bh(&mhi_cntrl->pm_lock);
542- return ret;
543- }
544-
545- if (mhi_chan->dir == DMA_TO_DEVICE)
546- atomic_inc(&mhi_cntrl->pending_pkts);
547-
548- if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
549- read_lock_bh(&mhi_chan->lock);
550- mhi_ring_chan_db(mhi_cntrl, mhi_chan);
551- read_unlock_bh(&mhi_chan->lock);
552- }
553-
554- read_unlock_bh(&mhi_cntrl->pm_lock);
555+ if (unlikely(mhi_chan->pre_alloc))
556+ return -EINVAL;
557
558- return 0;
559+ return mhi_queue(mhi_dev, &buf_info, dir, mflags);
560 }
561 EXPORT_SYMBOL_GPL(mhi_queue_dma);
562
563@@ -1193,57 +1226,13 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
564 int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
565 void *buf, size_t len, enum mhi_flags mflags)
566 {
567- struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
568- struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
569- mhi_dev->dl_chan;
570- struct mhi_ring *tre_ring;
571 struct mhi_buf_info buf_info = { };
572- unsigned long flags;
573- int ret;
574-
575- /*
576- * this check here only as a guard, it's always
577- * possible mhi can enter error while executing rest of function,
578- * which is not fatal so we do not need to hold pm_lock
579- */
580- if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
581- return -EIO;
582-
583- tre_ring = &mhi_chan->tre_ring;
584- if (mhi_is_ring_full(mhi_cntrl, tre_ring))
585- return -ENOMEM;
586
587 buf_info.v_addr = buf;
588 buf_info.cb_buf = buf;
589 buf_info.len = len;
590
591- ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
592- if (unlikely(ret))
593- return ret;
594-
595- read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
596-
597- /* we're in M3 or transitioning to M3 */
598- if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state))
599- mhi_trigger_resume(mhi_cntrl);
600-
601- /* Toggle wake to exit out of M2 */
602- mhi_cntrl->wake_toggle(mhi_cntrl);
603-
604- if (mhi_chan->dir == DMA_TO_DEVICE)
605- atomic_inc(&mhi_cntrl->pending_pkts);
606-
607- if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
608- unsigned long flags;
609-
610- read_lock_irqsave(&mhi_chan->lock, flags);
611- mhi_ring_chan_db(mhi_cntrl, mhi_chan);
612- read_unlock_irqrestore(&mhi_chan->lock, flags);
613- }
614-
615- read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
616-
617- return 0;
618+ return mhi_queue(mhi_dev, &buf_info, dir, mflags);
619 }
620 EXPORT_SYMBOL_GPL(mhi_queue_buf);
621
622@@ -1285,6 +1274,11 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
623 cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
624 cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
625 break;
626+ case MHI_CMD_STOP_CHAN:
627+ cmd_tre->ptr = MHI_TRE_CMD_STOP_PTR;
628+ cmd_tre->dword[0] = MHI_TRE_CMD_STOP_DWORD0;
629+ cmd_tre->dword[1] = MHI_TRE_CMD_STOP_DWORD1(chan);
630+ break;
631 case MHI_CMD_START_CHAN:
632 cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
633 cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
634@@ -1306,56 +1300,125 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
635 return 0;
636 }
637
638-static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
639- struct mhi_chan *mhi_chan)
640+static int mhi_update_channel_state(struct mhi_controller *mhi_cntrl,
641+ struct mhi_chan *mhi_chan,
642+ enum mhi_ch_state_type to_state)
643 {
644+ struct device *dev = &mhi_chan->mhi_dev->dev;
645+ enum mhi_cmd_type cmd = MHI_CMD_NOP;
646 int ret;
647- struct device *dev = &mhi_cntrl->mhi_dev->dev;
648
649- dev_dbg(dev, "Entered: unprepare channel:%d\n", mhi_chan->chan);
650-
651- /* no more processing events for this channel */
652- mutex_lock(&mhi_chan->mutex);
653- write_lock_irq(&mhi_chan->lock);
654- if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED &&
655- mhi_chan->ch_state != MHI_CH_STATE_SUSPENDED) {
656+ dev_dbg(dev, "%d: Updating channel state to: %s\n", mhi_chan->chan,
657+ TO_CH_STATE_TYPE_STR(to_state));
658+
659+ switch (to_state) {
660+ case MHI_CH_STATE_TYPE_RESET:
661+ write_lock_irq(&mhi_chan->lock);
662+ if (mhi_chan->ch_state != MHI_CH_STATE_STOP &&
663+ mhi_chan->ch_state != MHI_CH_STATE_ENABLED &&
664+ mhi_chan->ch_state != MHI_CH_STATE_SUSPENDED) {
665+ write_unlock_irq(&mhi_chan->lock);
666+ return -EINVAL;
667+ }
668+ mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
669 write_unlock_irq(&mhi_chan->lock);
670- mutex_unlock(&mhi_chan->mutex);
671- return;
672+
673+ cmd = MHI_CMD_RESET_CHAN;
674+ break;
675+ case MHI_CH_STATE_TYPE_STOP:
676+ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
677+ return -EINVAL;
678+
679+ cmd = MHI_CMD_STOP_CHAN;
680+ break;
681+ case MHI_CH_STATE_TYPE_START:
682+ if (mhi_chan->ch_state != MHI_CH_STATE_STOP &&
683+ mhi_chan->ch_state != MHI_CH_STATE_DISABLED)
684+ return -EINVAL;
685+
686+ cmd = MHI_CMD_START_CHAN;
687+ break;
688+ default:
689+ dev_err(dev, "%d: Channel state update to %s not allowed\n",
690+ mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
691+ return -EINVAL;
692 }
693
694- mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
695- write_unlock_irq(&mhi_chan->lock);
696+ /* bring host and device out of suspended states */
697+ ret = mhi_device_get_sync(mhi_cntrl->mhi_dev);
698+ if (ret)
699+ return ret;
700+ mhi_cntrl->runtime_get(mhi_cntrl);
701
702 reinit_completion(&mhi_chan->completion);
703- read_lock_bh(&mhi_cntrl->pm_lock);
704- if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
705- read_unlock_bh(&mhi_cntrl->pm_lock);
706- goto error_invalid_state;
707+ ret = mhi_send_cmd(mhi_cntrl, mhi_chan, cmd);
708+ if (ret) {
709+ dev_err(dev, "%d: Failed to send %s channel command\n",
710+ mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
711+ goto exit_channel_update;
712 }
713
714- mhi_cntrl->wake_toggle(mhi_cntrl);
715- read_unlock_bh(&mhi_cntrl->pm_lock);
716+ ret = wait_for_completion_timeout(&mhi_chan->completion,
717+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
718+ if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
719+ dev_err(dev,
720+ "%d: Failed to receive %s channel command completion\n",
721+ mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
722+ ret = -EIO;
723+ goto exit_channel_update;
724+ }
725
726- mhi_cntrl->runtime_get(mhi_cntrl);
727+ ret = 0;
728+
729+ if (to_state != MHI_CH_STATE_TYPE_RESET) {
730+ write_lock_irq(&mhi_chan->lock);
731+ mhi_chan->ch_state = (to_state == MHI_CH_STATE_TYPE_START) ?
732+ MHI_CH_STATE_ENABLED : MHI_CH_STATE_STOP;
733+ write_unlock_irq(&mhi_chan->lock);
734+ }
735+
736+ dev_dbg(dev, "%d: Channel state change to %s successful\n",
737+ mhi_chan->chan, TO_CH_STATE_TYPE_STR(to_state));
738+
739+exit_channel_update:
740 mhi_cntrl->runtime_put(mhi_cntrl);
741- ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
742+ mhi_device_put(mhi_cntrl->mhi_dev);
743+
744+ return ret;
745+}
746+
747+static void mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
748+ struct mhi_chan *mhi_chan)
749+{
750+ int ret;
751+ struct device *dev = &mhi_chan->mhi_dev->dev;
752+
753+ mutex_lock(&mhi_chan->mutex);
754+
755+ if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
756+ dev_dbg(dev, "Current EE: %s Required EE Mask: 0x%x\n",
757+ TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask);
758+ goto exit_unprepare_channel;
759+ }
760+
761+ /* no more processing events for this channel */
762+ ret = mhi_update_channel_state(mhi_cntrl, mhi_chan,
763+ MHI_CH_STATE_TYPE_RESET);
764 if (ret)
765- goto error_invalid_state;
766+ dev_err(dev, "%d: Failed to reset channel, still resetting\n",
767+ mhi_chan->chan);
768
769- /* even if it fails we will still reset */
770- ret = wait_for_completion_timeout(&mhi_chan->completion,
771- msecs_to_jiffies(mhi_cntrl->timeout_ms));
772- if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
773- dev_err(dev,
774- "Failed to receive cmd completion, still resetting\n");
775+exit_unprepare_channel:
776+ write_lock_irq(&mhi_chan->lock);
777+ mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
778+ write_unlock_irq(&mhi_chan->lock);
779
780-error_invalid_state:
781 if (!mhi_chan->offload_ch) {
782 mhi_reset_chan(mhi_cntrl, mhi_chan);
783 mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
784 }
785- dev_dbg(dev, "chan:%d successfully resetted\n", mhi_chan->chan);
786+ dev_dbg(dev, "%d: successfully reset\n", mhi_chan->chan);
787+
788 mutex_unlock(&mhi_chan->mutex);
789 }
790
791@@ -1363,28 +1426,16 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
792 struct mhi_chan *mhi_chan)
793 {
794 int ret = 0;
795- struct device *dev = &mhi_cntrl->mhi_dev->dev;
796-
797- dev_dbg(dev, "Preparing channel: %d\n", mhi_chan->chan);
798+ struct device *dev = &mhi_chan->mhi_dev->dev;
799
800 if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
801- dev_err(dev,
802- "Current EE: %s Required EE Mask: 0x%x for chan: %s\n",
803- TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask,
804- mhi_chan->name);
805+ dev_err(dev, "Current EE: %s Required EE Mask: 0x%x\n",
806+ TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask);
807 return -ENOTCONN;
808 }
809
810 mutex_lock(&mhi_chan->mutex);
811
812- /* If channel is not in disable state, do not allow it to start */
813- if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) {
814- ret = -EIO;
815- dev_dbg(dev, "channel: %d is not in disabled state\n",
816- mhi_chan->chan);
817- goto error_init_chan;
818- }
819-
820 /* Check of client manages channel context for offload channels */
821 if (!mhi_chan->offload_ch) {
822 ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
823@@ -1392,34 +1443,11 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
824 goto error_init_chan;
825 }
826
827- reinit_completion(&mhi_chan->completion);
828- read_lock_bh(&mhi_cntrl->pm_lock);
829- if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
830- read_unlock_bh(&mhi_cntrl->pm_lock);
831- ret = -EIO;
832- goto error_pm_state;
833- }
834-
835- mhi_cntrl->wake_toggle(mhi_cntrl);
836- read_unlock_bh(&mhi_cntrl->pm_lock);
837- mhi_cntrl->runtime_get(mhi_cntrl);
838- mhi_cntrl->runtime_put(mhi_cntrl);
839-
840- ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
841+ ret = mhi_update_channel_state(mhi_cntrl, mhi_chan,
842+ MHI_CH_STATE_TYPE_START);
843 if (ret)
844 goto error_pm_state;
845
846- ret = wait_for_completion_timeout(&mhi_chan->completion,
847- msecs_to_jiffies(mhi_cntrl->timeout_ms));
848- if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
849- ret = -EIO;
850- goto error_pm_state;
851- }
852-
853- write_lock_irq(&mhi_chan->lock);
854- mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
855- write_unlock_irq(&mhi_chan->lock);
856-
857 /* Pre-allocate buffer for xfer ring */
858 if (mhi_chan->pre_alloc) {
859 int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
860@@ -1457,9 +1485,6 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
861
862 mutex_unlock(&mhi_chan->mutex);
863
864- dev_dbg(dev, "Chan: %d successfully moved to start state\n",
865- mhi_chan->chan);
866-
867 return 0;
868
869 error_pm_state:
870@@ -1473,7 +1498,7 @@ error_init_chan:
871
872 error_pre_alloc:
873 mutex_unlock(&mhi_chan->mutex);
874- __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
875+ mhi_unprepare_channel(mhi_cntrl, mhi_chan);
876
877 return ret;
878 }
879@@ -1535,8 +1560,11 @@ static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
880 while (tre_ring->rp != tre_ring->wp) {
881 struct mhi_buf_info *buf_info = buf_ring->rp;
882
883- if (mhi_chan->dir == DMA_TO_DEVICE)
884+ if (mhi_chan->dir == DMA_TO_DEVICE) {
885 atomic_dec(&mhi_cntrl->pending_pkts);
886+ /* Release the reference got from mhi_queue() */
887+ mhi_cntrl->runtime_put(mhi_cntrl);
888+ }
889
890 if (!buf_info->pre_mapped)
891 mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
892@@ -1599,7 +1627,7 @@ error_open_chan:
893 if (!mhi_chan)
894 continue;
895
896- __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
897+ mhi_unprepare_channel(mhi_cntrl, mhi_chan);
898 }
899
900 return ret;
901@@ -1617,7 +1645,7 @@ void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
902 if (!mhi_chan)
903 continue;
904
905- __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
906+ mhi_unprepare_channel(mhi_cntrl, mhi_chan);
907 }
908 }
909 EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer);
910diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
911index 87d3b73..e2e59a3 100644
912--- a/drivers/bus/mhi/core/pm.c
913+++ b/drivers/bus/mhi/core/pm.c
914@@ -153,35 +153,33 @@ static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl)
915 /* Handle device ready state transition */
916 int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
917 {
918- void __iomem *base = mhi_cntrl->regs;
919 struct mhi_event *mhi_event;
920 enum mhi_pm_state cur_state;
921 struct device *dev = &mhi_cntrl->mhi_dev->dev;
922- u32 reset = 1, ready = 0;
923+ u32 interval_us = 25000; /* poll register field every 25 milliseconds */
924 int ret, i;
925
926- /* Wait for RESET to be cleared and READY bit to be set by the device */
927- wait_event_timeout(mhi_cntrl->state_event,
928- MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
929- mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
930- MHICTRL_RESET_MASK,
931- MHICTRL_RESET_SHIFT, &reset) ||
932- mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
933- MHISTATUS_READY_MASK,
934- MHISTATUS_READY_SHIFT, &ready) ||
935- (!reset && ready),
936- msecs_to_jiffies(mhi_cntrl->timeout_ms));
937-
938 /* Check if device entered error state */
939 if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
940 dev_err(dev, "Device link is not accessible\n");
941 return -EIO;
942 }
943
944- /* Timeout if device did not transition to ready state */
945- if (reset || !ready) {
946- dev_err(dev, "Device Ready timeout\n");
947- return -ETIMEDOUT;
948+ /* Wait for RESET to be cleared and READY bit to be set by the device */
949+ ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
950+ MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
951+ interval_us);
952+ if (ret) {
953+ dev_err(dev, "Device failed to clear MHI Reset\n");
954+ return ret;
955+ }
956+
957+ ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
958+ MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT, 1,
959+ interval_us);
960+ if (ret) {
961+ dev_err(dev, "Device failed to enter MHI Ready\n");
962+ return ret;
963 }
964
965 dev_dbg(dev, "Device in READY State\n");
966@@ -564,6 +562,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
967 static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
968 {
969 enum mhi_pm_state cur_state, prev_state;
970+ enum dev_st_transition next_state;
971 struct mhi_event *mhi_event;
972 struct mhi_cmd_ctxt *cmd_ctxt;
973 struct mhi_cmd *mhi_cmd;
974@@ -677,7 +676,23 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
975 er_ctxt->wp = er_ctxt->rbase;
976 }
977
978- mhi_ready_state_transition(mhi_cntrl);
979+ /* Transition to next state */
980+ if (MHI_IN_PBL(mhi_get_exec_env(mhi_cntrl))) {
981+ write_lock_irq(&mhi_cntrl->pm_lock);
982+ cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
983+ write_unlock_irq(&mhi_cntrl->pm_lock);
984+ if (cur_state != MHI_PM_POR) {
985+ dev_err(dev, "Error moving to state %s from %s\n",
986+ to_mhi_pm_state_str(MHI_PM_POR),
987+ to_mhi_pm_state_str(cur_state));
988+ goto exit_sys_error_transition;
989+ }
990+ next_state = DEV_ST_TRANSITION_PBL;
991+ } else {
992+ next_state = DEV_ST_TRANSITION_READY;
993+ }
994+
995+ mhi_queue_state_transition(mhi_cntrl, next_state);
996
997 exit_sys_error_transition:
998 dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
999@@ -746,8 +761,7 @@ void mhi_pm_st_worker(struct work_struct *work)
1000 if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
1001 mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
1002 write_unlock_irq(&mhi_cntrl->pm_lock);
1003- if (MHI_IN_PBL(mhi_cntrl->ee))
1004- mhi_fw_load_handler(mhi_cntrl);
1005+ mhi_fw_load_handler(mhi_cntrl);
1006 break;
1007 case DEV_ST_TRANSITION_SBL:
1008 write_lock_irq(&mhi_cntrl->pm_lock);
1009@@ -765,6 +779,12 @@ void mhi_pm_st_worker(struct work_struct *work)
1010 case DEV_ST_TRANSITION_MISSION_MODE:
1011 mhi_pm_mission_mode_transition(mhi_cntrl);
1012 break;
1013+ case DEV_ST_TRANSITION_FP:
1014+ write_lock_irq(&mhi_cntrl->pm_lock);
1015+ mhi_cntrl->ee = MHI_EE_FP;
1016+ write_unlock_irq(&mhi_cntrl->pm_lock);
1017+ mhi_create_devices(mhi_cntrl);
1018+ break;
1019 case DEV_ST_TRANSITION_READY:
1020 mhi_ready_state_transition(mhi_cntrl);
1021 break;
1022@@ -828,7 +848,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
1023 return -EBUSY;
1024 }
1025
1026- dev_info(dev, "Allowing M3 transition\n");
1027+ dev_dbg(dev, "Allowing M3 transition\n");
1028 new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
1029 if (new_state != MHI_PM_M3_ENTER) {
1030 write_unlock_irq(&mhi_cntrl->pm_lock);
1031@@ -842,7 +862,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
1032 /* Set MHI to M3 and wait for completion */
1033 mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
1034 write_unlock_irq(&mhi_cntrl->pm_lock);
1035- dev_info(dev, "Wait for M3 completion\n");
1036+ dev_dbg(dev, "Waiting for M3 completion\n");
1037
1038 ret = wait_event_timeout(mhi_cntrl->state_event,
1039 mhi_cntrl->dev_state == MHI_STATE_M3 ||
1040@@ -876,9 +896,9 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
1041 enum mhi_pm_state cur_state;
1042 int ret;
1043
1044- dev_info(dev, "Entered with PM state: %s, MHI state: %s\n",
1045- to_mhi_pm_state_str(mhi_cntrl->pm_state),
1046- TO_MHI_STATE_STR(mhi_cntrl->dev_state));
1047+ dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
1048+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
1049+ TO_MHI_STATE_STR(mhi_cntrl->dev_state));
1050
1051 if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
1052 return 0;
1053@@ -886,6 +906,9 @@ int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
1054 if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
1055 return -EIO;
1056
1057+ if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3)
1058+ return -EINVAL;
1059+
1060 /* Notify clients about exiting LPM */
1061 list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
1062 mutex_lock(&itr->mutex);
1063@@ -1039,13 +1062,6 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
1064 mutex_lock(&mhi_cntrl->pm_mutex);
1065 mhi_cntrl->pm_state = MHI_PM_DISABLE;
1066
1067- if (!mhi_cntrl->pre_init) {
1068- /* Setup device context */
1069- ret = mhi_init_dev_ctxt(mhi_cntrl);
1070- if (ret)
1071- goto error_dev_ctxt;
1072- }
1073-
1074 ret = mhi_init_irq_setup(mhi_cntrl);
1075 if (ret)
1076 goto error_setup_irq;
1077@@ -1127,10 +1143,7 @@ error_bhi_offset:
1078 mhi_deinit_free_irq(mhi_cntrl);
1079
1080 error_setup_irq:
1081- if (!mhi_cntrl->pre_init)
1082- mhi_deinit_dev_ctxt(mhi_cntrl);
1083-
1084-error_dev_ctxt:
1085+ mhi_cntrl->pm_state = MHI_PM_DISABLE;
1086 mutex_unlock(&mhi_cntrl->pm_mutex);
1087
1088 return ret;
1089@@ -1142,12 +1155,19 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
1090 enum mhi_pm_state cur_state, transition_state;
1091 struct device *dev = &mhi_cntrl->mhi_dev->dev;
1092
1093+ mutex_lock(&mhi_cntrl->pm_mutex);
1094+ write_lock_irq(&mhi_cntrl->pm_lock);
1095+ cur_state = mhi_cntrl->pm_state;
1096+ if (cur_state == MHI_PM_DISABLE) {
1097+ write_unlock_irq(&mhi_cntrl->pm_lock);
1098+ mutex_unlock(&mhi_cntrl->pm_mutex);
1099+ return; /* Already powered down */
1100+ }
1101+
1102 /* If it's not a graceful shutdown, force MHI to linkdown state */
1103 transition_state = (graceful) ? MHI_PM_SHUTDOWN_PROCESS :
1104 MHI_PM_LD_ERR_FATAL_DETECT;
1105
1106- mutex_lock(&mhi_cntrl->pm_mutex);
1107- write_lock_irq(&mhi_cntrl->pm_lock);
1108 cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
1109 if (cur_state != transition_state) {
1110 dev_err(dev, "Failed to move to state: %s from: %s\n",
1111@@ -1172,15 +1192,6 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
1112 flush_work(&mhi_cntrl->st_worker);
1113
1114 free_irq(mhi_cntrl->irq[0], mhi_cntrl);
1115-
1116- if (!mhi_cntrl->pre_init) {
1117- /* Free all allocated resources */
1118- if (mhi_cntrl->fbc_image) {
1119- mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
1120- mhi_cntrl->fbc_image = NULL;
1121- }
1122- mhi_deinit_dev_ctxt(mhi_cntrl);
1123- }
1124 }
1125 EXPORT_SYMBOL_GPL(mhi_power_down);
1126
1127diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/pci_generic.c
1128index f5bee76..8f71551 100644
1129--- a/drivers/bus/mhi/pci_generic.c
1130+++ b/drivers/bus/mhi/pci_generic.c
1131@@ -8,13 +8,22 @@
1132 * Copyright (C) 2020 Linaro Ltd <loic.poulain@linaro.org>
1133 */
1134
1135+#include <linux/aer.h>
1136+#include <linux/delay.h>
1137 #include <linux/device.h>
1138 #include <linux/mhi.h>
1139 #include <linux/module.h>
1140 #include <linux/pci.h>
1141+#include <linux/pm_runtime.h>
1142+#include <linux/timer.h>
1143+#include <linux/workqueue.h>
1144
1145 #define MHI_PCI_DEFAULT_BAR_NUM 0
1146
1147+#define MHI_POST_RESET_DELAY_MS 500
1148+
1149+#define HEALTH_CHECK_PERIOD (HZ * 2)
1150+
1151 /**
1152 * struct mhi_pci_dev_info - MHI PCI device specific information
1153 * @config: MHI controller configuration
1154@@ -63,9 +72,9 @@ struct mhi_pci_dev_info {
1155 .doorbell_mode_switch = false, \
1156 }
1157
1158-#define MHI_EVENT_CONFIG_CTRL(ev_ring) \
1159+#define MHI_EVENT_CONFIG_CTRL(ev_ring, el_count) \
1160 { \
1161- .num_elements = 64, \
1162+ .num_elements = el_count, \
1163 .irq_moderation_ms = 0, \
1164 .irq = (ev_ring) + 1, \
1165 .priority = 1, \
1166@@ -76,9 +85,99 @@ struct mhi_pci_dev_info {
1167 .offload_channel = false, \
1168 }
1169
1170-#define MHI_EVENT_CONFIG_DATA(ev_ring) \
1171+#define MHI_CHANNEL_CONFIG_HW_UL(ch_num, ch_name, el_count, ev_ring) \
1172+ { \
1173+ .num = ch_num, \
1174+ .name = ch_name, \
1175+ .num_elements = el_count, \
1176+ .event_ring = ev_ring, \
1177+ .dir = DMA_TO_DEVICE, \
1178+ .ee_mask = BIT(MHI_EE_AMSS), \
1179+ .pollcfg = 0, \
1180+ .doorbell = MHI_DB_BRST_ENABLE, \
1181+ .lpm_notify = false, \
1182+ .offload_channel = false, \
1183+ .doorbell_mode_switch = true, \
1184+ } \
1185+
1186+#define MHI_CHANNEL_CONFIG_HW_DL(ch_num, ch_name, el_count, ev_ring) \
1187+ { \
1188+ .num = ch_num, \
1189+ .name = ch_name, \
1190+ .num_elements = el_count, \
1191+ .event_ring = ev_ring, \
1192+ .dir = DMA_FROM_DEVICE, \
1193+ .ee_mask = BIT(MHI_EE_AMSS), \
1194+ .pollcfg = 0, \
1195+ .doorbell = MHI_DB_BRST_ENABLE, \
1196+ .lpm_notify = false, \
1197+ .offload_channel = false, \
1198+ .doorbell_mode_switch = true, \
1199+ }
1200+
1201+#define MHI_CHANNEL_CONFIG_UL_SBL(ch_num, ch_name, el_count, ev_ring) \
1202+ { \
1203+ .num = ch_num, \
1204+ .name = ch_name, \
1205+ .num_elements = el_count, \
1206+ .event_ring = ev_ring, \
1207+ .dir = DMA_TO_DEVICE, \
1208+ .ee_mask = BIT(MHI_EE_SBL), \
1209+ .pollcfg = 0, \
1210+ .doorbell = MHI_DB_BRST_DISABLE, \
1211+ .lpm_notify = false, \
1212+ .offload_channel = false, \
1213+ .doorbell_mode_switch = false, \
1214+ } \
1215+
1216+#define MHI_CHANNEL_CONFIG_DL_SBL(ch_num, ch_name, el_count, ev_ring) \
1217+ { \
1218+ .num = ch_num, \
1219+ .name = ch_name, \
1220+ .num_elements = el_count, \
1221+ .event_ring = ev_ring, \
1222+ .dir = DMA_FROM_DEVICE, \
1223+ .ee_mask = BIT(MHI_EE_SBL), \
1224+ .pollcfg = 0, \
1225+ .doorbell = MHI_DB_BRST_DISABLE, \
1226+ .lpm_notify = false, \
1227+ .offload_channel = false, \
1228+ .doorbell_mode_switch = false, \
1229+ }
1230+
1231+#define MHI_CHANNEL_CONFIG_UL_FP(ch_num, ch_name, el_count, ev_ring) \
1232+ { \
1233+ .num = ch_num, \
1234+ .name = ch_name, \
1235+ .num_elements = el_count, \
1236+ .event_ring = ev_ring, \
1237+ .dir = DMA_TO_DEVICE, \
1238+ .ee_mask = BIT(MHI_EE_FP), \
1239+ .pollcfg = 0, \
1240+ .doorbell = MHI_DB_BRST_DISABLE, \
1241+ .lpm_notify = false, \
1242+ .offload_channel = false, \
1243+ .doorbell_mode_switch = false, \
1244+ } \
1245+
1246+#define MHI_CHANNEL_CONFIG_DL_FP(ch_num, ch_name, el_count, ev_ring) \
1247+ { \
1248+ .num = ch_num, \
1249+ .name = ch_name, \
1250+ .num_elements = el_count, \
1251+ .event_ring = ev_ring, \
1252+ .dir = DMA_FROM_DEVICE, \
1253+ .ee_mask = BIT(MHI_EE_FP), \
1254+ .pollcfg = 0, \
1255+ .doorbell = MHI_DB_BRST_DISABLE, \
1256+ .lpm_notify = false, \
1257+ .offload_channel = false, \
1258+ .doorbell_mode_switch = false, \
1259+ }
1260+
1261+#define MHI_EVENT_CONFIG_DATA(ev_ring, el_count) \
1262 { \
1263- .num_elements = 128, \
1264+ .num_elements = el_count, \
1265 .irq_moderation_ms = 5, \
1266 .irq = (ev_ring) + 1, \
1267 .priority = 1, \
1268@@ -89,10 +188,10 @@ struct mhi_pci_dev_info {
1269 .offload_channel = false, \
1270 }
1271
1272-#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, ch_num) \
1273+#define MHI_EVENT_CONFIG_HW_DATA(ev_ring, el_count, ch_num) \
1274 { \
1275- .num_elements = 128, \
1276- .irq_moderation_ms = 5, \
1277+ .num_elements = el_count, \
1278+ .irq_moderation_ms = 1, \
1279 .irq = (ev_ring) + 1, \
1280 .priority = 1, \
1281 .mode = MHI_DB_BRST_DISABLE, \
1282@@ -104,33 +203,48 @@ struct mhi_pci_dev_info {
1283 }
1284
1285 static const struct mhi_channel_config modem_qcom_v1_mhi_channels[] = {
1286+ MHI_CHANNEL_CONFIG_UL(4, "DIAG", 16, 1),
1287+ MHI_CHANNEL_CONFIG_DL(5, "DIAG", 16, 1),
1288 MHI_CHANNEL_CONFIG_UL(12, "MBIM", 4, 0),
1289 MHI_CHANNEL_CONFIG_DL(13, "MBIM", 4, 0),
1290 MHI_CHANNEL_CONFIG_UL(14, "QMI", 4, 0),
1291 MHI_CHANNEL_CONFIG_DL(15, "QMI", 4, 0),
1292 MHI_CHANNEL_CONFIG_UL(20, "IPCR", 8, 0),
1293 MHI_CHANNEL_CONFIG_DL(21, "IPCR", 8, 0),
1294- MHI_CHANNEL_CONFIG_UL(100, "IP_HW0", 128, 1),
1295- MHI_CHANNEL_CONFIG_DL(101, "IP_HW0", 128, 2),
1296+ MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
1297+ MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
1298+ MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 2),
1299+ MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0", 128, 3),
1300 };
1301
1302-static const struct mhi_event_config modem_qcom_v1_mhi_events[] = {
1303+static struct mhi_event_config modem_qcom_v1_mhi_events[] = {
1304 /* first ring is control+data ring */
1305- MHI_EVENT_CONFIG_CTRL(0),
1306+ MHI_EVENT_CONFIG_CTRL(0, 64),
1307+ /* DIAG dedicated event ring */
1308+ MHI_EVENT_CONFIG_DATA(1, 128),
1309 /* Hardware channels request dedicated hardware event rings */
1310- MHI_EVENT_CONFIG_HW_DATA(1, 100),
1311- MHI_EVENT_CONFIG_HW_DATA(2, 101)
1312+ MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
1313+ MHI_EVENT_CONFIG_HW_DATA(3, 2048, 101)
1314 };
1315
1316-static const struct mhi_controller_config modem_qcom_v1_mhiv_config = {
1317+static struct mhi_controller_config modem_qcom_v1_mhiv_config = {
1318 .max_channels = 128,
1319- .timeout_ms = 5000,
1320+ .timeout_ms = 8000,
1321 .num_channels = ARRAY_SIZE(modem_qcom_v1_mhi_channels),
1322 .ch_cfg = modem_qcom_v1_mhi_channels,
1323 .num_events = ARRAY_SIZE(modem_qcom_v1_mhi_events),
1324 .event_cfg = modem_qcom_v1_mhi_events,
1325 };
1326
1327+static const struct mhi_pci_dev_info mhi_qcom_sdx65_info = {
1328+ .name = "qcom-sdx65m",
1329+ .fw = "qcom/sdx65m/xbl.elf",
1330+ .edl = "qcom/sdx65m/edl.mbn",
1331+ .config = &modem_qcom_v1_mhiv_config,
1332+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
1333+ .dma_data_width = 32
1334+};
1335+
1336 static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
1337 .name = "qcom-sdx55m",
1338 .fw = "qcom/sdx55m/sbl1.mbn",
1339@@ -140,13 +254,131 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
1340 .dma_data_width = 32
1341 };
1342
1343+static const struct mhi_pci_dev_info mhi_qcom_sdx24_info = {
1344+ .name = "qcom-sdx24",
1345+ .edl = "qcom/prog_firehose_sdx24.mbn",
1346+ .config = &modem_qcom_v1_mhiv_config,
1347+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
1348+ .dma_data_width = 32
1349+};
1350+
1351+static const struct mhi_channel_config mhi_quectel_em1xx_channels[] = {
1352+ MHI_CHANNEL_CONFIG_UL(0, "NMEA", 32, 0),
1353+ MHI_CHANNEL_CONFIG_DL(1, "NMEA", 32, 0),
1354+ MHI_CHANNEL_CONFIG_UL_SBL(2, "SAHARA", 32, 0),
1355+ MHI_CHANNEL_CONFIG_DL_SBL(3, "SAHARA", 32, 0),
1356+ MHI_CHANNEL_CONFIG_UL(4, "DIAG", 32, 1),
1357+ MHI_CHANNEL_CONFIG_DL(5, "DIAG", 32, 1),
1358+ MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
1359+ MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
1360+ MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0),
1361+ MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0),
1362+ /* The EDL firmware is a flash-programmer exposing firehose protocol */
1363+ MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
1364+ MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
1365+ MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
1366+ MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
1367+};
1368+
1369+static struct mhi_event_config mhi_quectel_em1xx_events[] = {
1370+ MHI_EVENT_CONFIG_CTRL(0, 128),
1371+ MHI_EVENT_CONFIG_DATA(1, 128),
1372+ MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
1373+ MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101)
1374+};
1375+
1376+static struct mhi_controller_config modem_quectel_em1xx_config = {
1377+ .max_channels = 128,
1378+ .timeout_ms = 20000,
1379+ .num_channels = ARRAY_SIZE(mhi_quectel_em1xx_channels),
1380+ .ch_cfg = mhi_quectel_em1xx_channels,
1381+ .num_events = ARRAY_SIZE(mhi_quectel_em1xx_events),
1382+ .event_cfg = mhi_quectel_em1xx_events,
1383+};
1384+
1385+static const struct mhi_pci_dev_info mhi_quectel_em1xx_info = {
1386+ .name = "quectel-em1xx",
1387+ .edl = "qcom/prog_firehose_sdx24.mbn",
1388+ .config = &modem_quectel_em1xx_config,
1389+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
1390+ .dma_data_width = 32
1391+};
1392+
1393+static const struct mhi_channel_config mhi_foxconn_sdx55_channels[] = {
1394+ MHI_CHANNEL_CONFIG_UL(0, "LOOPBACK", 32, 0),
1395+ MHI_CHANNEL_CONFIG_DL(1, "LOOPBACK", 32, 0),
1396+ MHI_CHANNEL_CONFIG_UL(4, "DIAG", 32, 1),
1397+ MHI_CHANNEL_CONFIG_DL(5, "DIAG", 32, 1),
1398+ MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
1399+ MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
1400+ MHI_CHANNEL_CONFIG_UL(32, "AT", 32, 0),
1401+ MHI_CHANNEL_CONFIG_DL(33, "AT", 32, 0),
1402+ MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
1403+ MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
1404+};
1405+
1406+static struct mhi_event_config mhi_foxconn_sdx55_events[] = {
1407+ MHI_EVENT_CONFIG_CTRL(0, 128),
1408+ MHI_EVENT_CONFIG_DATA(1, 128),
1409+ MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
1410+ MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101)
1411+};
1412+
1413+static struct mhi_controller_config modem_foxconn_sdx55_config = {
1414+ .max_channels = 128,
1415+ .timeout_ms = 20000,
1416+ .num_channels = ARRAY_SIZE(mhi_foxconn_sdx55_channels),
1417+ .ch_cfg = mhi_foxconn_sdx55_channels,
1418+ .num_events = ARRAY_SIZE(mhi_foxconn_sdx55_events),
1419+ .event_cfg = mhi_foxconn_sdx55_events,
1420+};
1421+
1422+static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
1423+ .name = "foxconn-sdx55",
1424+ .fw = "qcom/sdx55m/sbl1.mbn",
1425+ .edl = "qcom/sdx55m/edl.mbn",
1426+ .config = &modem_foxconn_sdx55_config,
1427+ .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
1428+ .dma_data_width = 32
1429+};
1430+
1431 static const struct pci_device_id mhi_pci_id_table[] = {
1432 { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
1433 .driver_data = (kernel_ulong_t) &mhi_qcom_sdx55_info },
1434+ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
1435+ .driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
1436+ { PCI_DEVICE(0x1eac, 0x1001), /* EM120R-GL (sdx24) */
1437+ .driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
1438+ { PCI_DEVICE(0x1eac, 0x1002), /* EM160R-GL (sdx24) */
1439+ .driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
1440+ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
1441+ .driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
1442+ /* T99W175 (sdx55), Both for eSIM and Non-eSIM */
1443+ { PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0ab),
1444+ .driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
1445+ /* DW5930e (sdx55), With eSIM, It's also T99W175 */
1446+ { PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0b0),
1447+ .driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
1448+ /* DW5930e (sdx55), Non-eSIM, It's also T99W175 */
1449+ { PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0b1),
1450+ .driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
1451 { }
1452 };
1453 MODULE_DEVICE_TABLE(pci, mhi_pci_id_table);
1454
1455+enum mhi_pci_device_status {
1456+ MHI_PCI_DEV_STARTED,
1457+ MHI_PCI_DEV_SUSPENDED,
1458+};
1459+
1460+struct mhi_pci_device {
1461+ struct mhi_controller mhi_cntrl;
1462+ struct pci_saved_state *pci_state;
1463+ struct work_struct recovery_work;
1464+ struct timer_list health_check_timer;
1465+ unsigned long status;
1466+};
1467+
1468 static int mhi_pci_read_reg(struct mhi_controller *mhi_cntrl,
1469 void __iomem *addr, u32 *out)
1470 {
1471@@ -163,7 +395,50 @@ static void mhi_pci_write_reg(struct mhi_controller *mhi_cntrl,
1472 static void mhi_pci_status_cb(struct mhi_controller *mhi_cntrl,
1473 enum mhi_callback cb)
1474 {
1475+ struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
1476+
1477 /* Nothing to do for now */
1478+ switch (cb) {
1479+ case MHI_CB_FATAL_ERROR:
1480+ case MHI_CB_SYS_ERROR:
1481+ dev_warn(&pdev->dev, "firmware crashed (%u)\n", cb);
1482+ pm_runtime_forbid(&pdev->dev);
1483+ break;
1484+ case MHI_CB_EE_MISSION_MODE:
1485+ pm_runtime_allow(&pdev->dev);
1486+ break;
1487+ default:
1488+ break;
1489+ }
1490+}
1491+
1492+static void mhi_pci_wake_get_nop(struct mhi_controller *mhi_cntrl, bool force)
1493+{
1494+ /* no-op */
1495+}
1496+
1497+static void mhi_pci_wake_put_nop(struct mhi_controller *mhi_cntrl, bool override)
1498+{
1499+ /* no-op */
1500+}
1501+
1502+static void mhi_pci_wake_toggle_nop(struct mhi_controller *mhi_cntrl)
1503+{
1504+ /* no-op */
1505+}
1506+
1507+static bool mhi_pci_is_alive(struct mhi_controller *mhi_cntrl)
1508+{
1509+ struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
1510+ u16 vendor = 0;
1511+
1512+ if (pci_read_config_word(pdev, PCI_VENDOR_ID, &vendor))
1513+ return false;
1514+
1515+ if (vendor == (u16) ~0 || vendor == 0)
1516+ return false;
1517+
1518+ return true;
1519 }
1520
1521 static int mhi_pci_claim(struct mhi_controller *mhi_cntrl,
1522@@ -227,8 +502,12 @@ static int mhi_pci_get_irqs(struct mhi_controller *mhi_cntrl,
1523 }
1524
1525 if (nr_vectors < mhi_cntrl->nr_irqs) {
1526- dev_warn(&pdev->dev, "Not enough MSI vectors (%d/%d), use shared MSI\n",
1527- nr_vectors, mhi_cntrl_config->num_events);
1528+ dev_warn(&pdev->dev, "using shared MSI\n");
1529+
1530+ /* Patch msi vectors, use only one (shared) */
1531+ for (i = 0; i < mhi_cntrl_config->num_events; i++)
1532+ mhi_cntrl_config->event_cfg[i].irq = 0;
1533+ mhi_cntrl->nr_irqs = 1;
1534 }
1535
1536 irq = devm_kcalloc(&pdev->dev, mhi_cntrl->nr_irqs, sizeof(int), GFP_KERNEL);
1537@@ -248,29 +527,108 @@ static int mhi_pci_get_irqs(struct mhi_controller *mhi_cntrl,
1538
1539 static int mhi_pci_runtime_get(struct mhi_controller *mhi_cntrl)
1540 {
1541- /* no PM for now */
1542- return 0;
1543+ /* The runtime_get() MHI callback means:
1544+ * Do whatever is required to leave M3.
1545+ */
1546+ return pm_runtime_get(mhi_cntrl->cntrl_dev);
1547 }
1548
1549 static void mhi_pci_runtime_put(struct mhi_controller *mhi_cntrl)
1550 {
1551- /* no PM for now */
1552+ /* The runtime_put() MHI callback means:
1553+ * The device can be moved into the M3 state.
1554+ */
1555+ pm_runtime_mark_last_busy(mhi_cntrl->cntrl_dev);
1556+ pm_runtime_put(mhi_cntrl->cntrl_dev);
1557+}
1558+
1559+static void mhi_pci_recovery_work(struct work_struct *work)
1560+{
1561+ struct mhi_pci_device *mhi_pdev = container_of(work, struct mhi_pci_device,
1562+ recovery_work);
1563+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1564+ struct pci_dev *pdev = to_pci_dev(mhi_cntrl->cntrl_dev);
1565+ int err;
1566+
1567+ dev_warn(&pdev->dev, "device recovery started\n");
1568+
1569+ del_timer(&mhi_pdev->health_check_timer);
1570+ pm_runtime_forbid(&pdev->dev);
1571+
1572+ /* Clean up MHI state */
1573+ if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {
1574+ mhi_power_down(mhi_cntrl, false);
1575+ mhi_unprepare_after_power_down(mhi_cntrl);
1576+ }
1577+
1578+ pci_set_power_state(pdev, PCI_D0);
1579+ pci_load_saved_state(pdev, mhi_pdev->pci_state);
1580+ pci_restore_state(pdev);
1581+
1582+ if (!mhi_pci_is_alive(mhi_cntrl))
1583+ goto err_try_reset;
1584+
1585+ err = mhi_prepare_for_power_up(mhi_cntrl);
1586+ if (err)
1587+ goto err_try_reset;
1588+
1589+ err = mhi_sync_power_up(mhi_cntrl);
1590+ if (err)
1591+ goto err_unprepare;
1592+
1593+ dev_dbg(&pdev->dev, "Recovery completed\n");
1594+
1595+ set_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status);
1596+ mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
1597+ return;
1598+
1599+err_unprepare:
1600+ mhi_unprepare_after_power_down(mhi_cntrl);
1601+err_try_reset:
1602+ if (pci_reset_function(pdev))
1603+ dev_err(&pdev->dev, "Recovery failed\n");
1604+}
1605+
1606+static void health_check(struct timer_list *t)
1607+{
1608+ struct mhi_pci_device *mhi_pdev = from_timer(mhi_pdev, t, health_check_timer);
1609+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1610+
1611+ if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
1612+ test_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
1613+ return;
1614+
1615+ if (!mhi_pci_is_alive(mhi_cntrl)) {
1616+ dev_err(mhi_cntrl->cntrl_dev, "Device died\n");
1617+ queue_work(system_long_wq, &mhi_pdev->recovery_work);
1618+ return;
1619+ }
1620+
1621+ /* reschedule in two seconds */
1622+ mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
1623 }
1624
1625 static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
1626 {
1627 const struct mhi_pci_dev_info *info = (struct mhi_pci_dev_info *) id->driver_data;
1628 const struct mhi_controller_config *mhi_cntrl_config;
1629+ struct mhi_pci_device *mhi_pdev;
1630 struct mhi_controller *mhi_cntrl;
1631 int err;
1632
1633 dev_dbg(&pdev->dev, "MHI PCI device found: %s\n", info->name);
1634
1635- mhi_cntrl = mhi_alloc_controller();
1636- if (!mhi_cntrl)
1637+ /* mhi_pdev.mhi_cntrl must be zero-initialized */
1638+ mhi_pdev = devm_kzalloc(&pdev->dev, sizeof(*mhi_pdev), GFP_KERNEL);
1639+ if (!mhi_pdev)
1640 return -ENOMEM;
1641
1642+ INIT_WORK(&mhi_pdev->recovery_work, mhi_pci_recovery_work);
1643+ timer_setup(&mhi_pdev->health_check_timer, health_check, 0);
1644+
1645 mhi_cntrl_config = info->config;
1646+ mhi_cntrl = &mhi_pdev->mhi_cntrl;
1647+
1648 mhi_cntrl->cntrl_dev = &pdev->dev;
1649 mhi_cntrl->iova_start = 0;
1650 mhi_cntrl->iova_stop = (dma_addr_t)DMA_BIT_MASK(info->dma_data_width);
1651@@ -282,20 +640,32 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
1652 mhi_cntrl->status_cb = mhi_pci_status_cb;
1653 mhi_cntrl->runtime_get = mhi_pci_runtime_get;
1654 mhi_cntrl->runtime_put = mhi_pci_runtime_put;
1655+ mhi_cntrl->wake_get = mhi_pci_wake_get_nop;
1656+ mhi_cntrl->wake_put = mhi_pci_wake_put_nop;
1657+ mhi_cntrl->wake_toggle = mhi_pci_wake_toggle_nop;
1658
1659 err = mhi_pci_claim(mhi_cntrl, info->bar_num, DMA_BIT_MASK(info->dma_data_width));
1660 if (err)
1661- goto err_release;
1662+ return err;
1663
1664 err = mhi_pci_get_irqs(mhi_cntrl, mhi_cntrl_config);
1665 if (err)
1666- goto err_release;
1667+ return err;
1668
1669- pci_set_drvdata(pdev, mhi_cntrl);
1670+ pci_set_drvdata(pdev, mhi_pdev);
1671+
1672+ /* Keep the saved PCI config space at hand for restore after a sudden
1673+ * PCI error: cache the state locally and discard the PCI core copy.
1674+ */
1675+ pci_save_state(pdev);
1676+ mhi_pdev->pci_state = pci_store_saved_state(pdev);
1677+ pci_load_saved_state(pdev, NULL);
1678+
1679+ pci_enable_pcie_error_reporting(pdev);
1680
1681 err = mhi_register_controller(mhi_cntrl, mhi_cntrl_config);
1682 if (err)
1683- goto err_release;
1684+ return err;
1685
1686 /* MHI bus does not power up the controller by default */
1687 err = mhi_prepare_for_power_up(mhi_cntrl);
1688@@ -310,33 +680,274 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
1689 goto err_unprepare;
1690 }
1691
1692+ set_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status);
1693+
1694+ /* start health check */
1695+ mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
1696+
1697+ /* Only allow runtime-suspend if PME capable (for wakeup) */
1698+ if (pci_pme_capable(pdev, PCI_D3hot)) {
1699+ pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
1700+ pm_runtime_use_autosuspend(&pdev->dev);
1701+ pm_runtime_mark_last_busy(&pdev->dev);
1702+ pm_runtime_put_noidle(&pdev->dev);
1703+ }
1704+
1705 return 0;
1706
1707 err_unprepare:
1708 mhi_unprepare_after_power_down(mhi_cntrl);
1709 err_unregister:
1710 mhi_unregister_controller(mhi_cntrl);
1711-err_release:
1712- mhi_free_controller(mhi_cntrl);
1713
1714 return err;
1715 }
1716
1717 static void mhi_pci_remove(struct pci_dev *pdev)
1718 {
1719- struct mhi_controller *mhi_cntrl = pci_get_drvdata(pdev);
1720+ struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev);
1721+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1722+
1723+ del_timer(&mhi_pdev->health_check_timer);
1724+ cancel_work_sync(&mhi_pdev->recovery_work);
1725+
1726+ if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {
1727+ mhi_power_down(mhi_cntrl, true);
1728+ mhi_unprepare_after_power_down(mhi_cntrl);
1729+ }
1730+
1731+ /* balancing probe put_noidle */
1732+ if (pci_pme_capable(pdev, PCI_D3hot))
1733+ pm_runtime_get_noresume(&pdev->dev);
1734
1735- mhi_power_down(mhi_cntrl, true);
1736- mhi_unprepare_after_power_down(mhi_cntrl);
1737 mhi_unregister_controller(mhi_cntrl);
1738- mhi_free_controller(mhi_cntrl);
1739 }
1740
1741+static void mhi_pci_shutdown(struct pci_dev *pdev)
1742+{
1743+ mhi_pci_remove(pdev);
1744+ pci_set_power_state(pdev, PCI_D3hot);
1745+}
1746+
1747+static void mhi_pci_reset_prepare(struct pci_dev *pdev)
1748+{
1749+ struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev);
1750+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1751+
1752+ dev_info(&pdev->dev, "reset\n");
1753+
1754+ del_timer(&mhi_pdev->health_check_timer);
1755+
1756+ /* Clean up MHI state */
1757+ if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {
1758+ mhi_power_down(mhi_cntrl, false);
1759+ mhi_unprepare_after_power_down(mhi_cntrl);
1760+ }
1761+
1762+ /* cause internal device reset */
1763+ mhi_soc_reset(mhi_cntrl);
1764+
1765+ /* Be sure device reset has been executed */
1766+ msleep(MHI_POST_RESET_DELAY_MS);
1767+}
1768+
1769+static void mhi_pci_reset_done(struct pci_dev *pdev)
1770+{
1771+ struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev);
1772+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1773+ int err;
1774+
1775+ /* Restore initial known working PCI state */
1776+ pci_load_saved_state(pdev, mhi_pdev->pci_state);
1777+ pci_restore_state(pdev);
1778+
1779+ /* Is the device status available? */
1780+ if (!mhi_pci_is_alive(mhi_cntrl)) {
1781+ dev_err(&pdev->dev, "reset failed\n");
1782+ return;
1783+ }
1784+
1785+ err = mhi_prepare_for_power_up(mhi_cntrl);
1786+ if (err) {
1787+ dev_err(&pdev->dev, "failed to prepare MHI controller\n");
1788+ return;
1789+ }
1790+
1791+ err = mhi_sync_power_up(mhi_cntrl);
1792+ if (err) {
1793+ dev_err(&pdev->dev, "failed to power up MHI controller\n");
1794+ mhi_unprepare_after_power_down(mhi_cntrl);
1795+ return;
1796+ }
1797+
1798+ set_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status);
1799+ mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
1800+}
1801+
1802+static pci_ers_result_t mhi_pci_error_detected(struct pci_dev *pdev,
1803+ pci_channel_state_t state)
1804+{
1805+ struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev);
1806+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1807+
1808+ dev_err(&pdev->dev, "PCI error detected, state = %u\n", state);
1809+
1810+ if (state == pci_channel_io_perm_failure)
1811+ return PCI_ERS_RESULT_DISCONNECT;
1812+
1813+ /* Clean up MHI state */
1814+ if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) {
1815+ mhi_power_down(mhi_cntrl, false);
1816+ mhi_unprepare_after_power_down(mhi_cntrl);
1817+ } else {
1818+ /* Nothing to do */
1819+ return PCI_ERS_RESULT_RECOVERED;
1820+ }
1821+
1822+ pci_disable_device(pdev);
1823+
1824+ return PCI_ERS_RESULT_NEED_RESET;
1825+}
1826+
1827+static pci_ers_result_t mhi_pci_slot_reset(struct pci_dev *pdev)
1828+{
1829+ if (pci_enable_device(pdev)) {
1830+ dev_err(&pdev->dev, "Cannot re-enable PCI device after reset.\n");
1831+ return PCI_ERS_RESULT_DISCONNECT;
1832+ }
1833+
1834+ return PCI_ERS_RESULT_RECOVERED;
1835+}
1836+
1837+static void mhi_pci_io_resume(struct pci_dev *pdev)
1838+{
1839+ struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev);
1840+
1841+ dev_err(&pdev->dev, "PCI slot reset done\n");
1842+
1843+ queue_work(system_long_wq, &mhi_pdev->recovery_work);
1844+}
1845+
1846+static const struct pci_error_handlers mhi_pci_err_handler = {
1847+ .error_detected = mhi_pci_error_detected,
1848+ .slot_reset = mhi_pci_slot_reset,
1849+ .resume = mhi_pci_io_resume,
1850+ .reset_prepare = mhi_pci_reset_prepare,
1851+ .reset_done = mhi_pci_reset_done,
1852+};
1853+
1854+static int __maybe_unused mhi_pci_runtime_suspend(struct device *dev)
1855+{
1856+ struct pci_dev *pdev = to_pci_dev(dev);
1857+ struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
1858+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1859+ int err;
1860+
1861+ if (test_and_set_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
1862+ return 0;
1863+
1864+ del_timer(&mhi_pdev->health_check_timer);
1865+ cancel_work_sync(&mhi_pdev->recovery_work);
1866+
1867+ if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
1868+ mhi_cntrl->ee != MHI_EE_AMSS)
1869+ goto pci_suspend; /* Nothing to do at MHI level */
1870+
1871+ /* Transition to M3 state */
1872+ err = mhi_pm_suspend(mhi_cntrl);
1873+ if (err) {
1874+ dev_err(&pdev->dev, "failed to suspend device: %d\n", err);
1875+ clear_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status);
1876+ return -EBUSY;
1877+ }
1878+
1879+pci_suspend:
1880+ pci_disable_device(pdev);
1881+ pci_wake_from_d3(pdev, true);
1882+
1883+ return 0;
1884+}
1885+
1886+static int __maybe_unused mhi_pci_runtime_resume(struct device *dev)
1887+{
1888+ struct pci_dev *pdev = to_pci_dev(dev);
1889+ struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev);
1890+ struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl;
1891+ int err;
1892+
1893+ if (!test_and_clear_bit(MHI_PCI_DEV_SUSPENDED, &mhi_pdev->status))
1894+ return 0;
1895+
1896+ err = pci_enable_device(pdev);
1897+ if (err)
1898+ goto err_recovery;
1899+
1900+ pci_set_master(pdev);
1901+ pci_wake_from_d3(pdev, false);
1902+
1903+ if (!test_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status) ||
1904+ mhi_cntrl->ee != MHI_EE_AMSS)
1905+ return 0; /* Nothing to do at MHI level */
1906+
1907+ /* Exit M3, transition to M0 state */
1908+ err = mhi_pm_resume(mhi_cntrl);
1909+ if (err) {
1910+ dev_err(&pdev->dev, "failed to resume device: %d\n", err);
1911+ goto err_recovery;
1912+ }
1913+
1914+ /* Resume health check */
1915+ mod_timer(&mhi_pdev->health_check_timer, jiffies + HEALTH_CHECK_PERIOD);
1916+
1917+ /* It can be a remote wakeup (no mhi runtime_get), update access time */
1918+ pm_runtime_mark_last_busy(dev);
1919+
1920+ return 0;
1921+
1922+err_recovery:
1923+ /* Do not return an error here, to avoid messing up the PCI device
1924+ * state: the device has likely lost power (d3cold) and simply needs to be
1925+ * reset from the recovery procedure. Trigger the recovery asynchronously
1926+ * so it does not delay system suspend exit.
1927+ */
1928+ queue_work(system_long_wq, &mhi_pdev->recovery_work);
1929+ pm_runtime_mark_last_busy(dev);
1930+
1931+ return 0;
1932+}
1933+
1934+static int __maybe_unused mhi_pci_suspend(struct device *dev)
1935+{
1936+ pm_runtime_disable(dev);
1937+ return mhi_pci_runtime_suspend(dev);
1938+}
1939+
1940+static int __maybe_unused mhi_pci_resume(struct device *dev)
1941+{
1942+ int ret;
1943+
1944+ /* Depending on the platform, the device may have lost power (d3cold);
1945+ * resume it now to check its state and recover when necessary.
1946+ */
1947+ ret = mhi_pci_runtime_resume(dev);
1948+ pm_runtime_enable(dev);
1949+
1950+ return ret;
1951+}
1952+
1953+static const struct dev_pm_ops mhi_pci_pm_ops = {
1954+ SET_RUNTIME_PM_OPS(mhi_pci_runtime_suspend, mhi_pci_runtime_resume, NULL)
1955+ SET_SYSTEM_SLEEP_PM_OPS(mhi_pci_suspend, mhi_pci_resume)
1956+};
1957+
1958 static struct pci_driver mhi_pci_driver = {
1959 .name = "mhi-pci-generic",
1960 .id_table = mhi_pci_id_table,
1961 .probe = mhi_pci_probe,
1962- .remove = mhi_pci_remove
1963+ .remove = mhi_pci_remove,
1964+ .shutdown = mhi_pci_shutdown,
1965+ .err_handler = &mhi_pci_err_handler,
1966+ .driver.pm = &mhi_pci_pm_ops
1967 };
1968 module_pci_driver(mhi_pci_driver);
1969
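
Adding support for a further modem to this driver follows the pattern above: one mhi_pci_dev_info describing the firmware/EDL images plus the controller config, and a matching pci_device_id entry. A hypothetical sketch for illustration only (the device name, firmware path and PCI IDs below are placeholders, not part of this series):

    /* Hypothetical example, not part of this series (placeholder IDs/paths) */
    static const struct mhi_pci_dev_info mhi_example_modem_info = {
            .name = "example-modem",
            .edl = "example/prog_firehose.mbn",     /* placeholder firmware path */
            .config = &modem_qcom_v1_mhiv_config,   /* reuse a generic config */
            .bar_num = MHI_PCI_DEFAULT_BAR_NUM,
            .dma_data_width = 32
    };

    /* ... and the corresponding mhi_pci_id_table entry: */
    { PCI_DEVICE(0x1234, 0x5678),                   /* placeholder vendor/device */
      .driver_data = (kernel_ulong_t) &mhi_example_modem_info },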
1970diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
1971index 6d9e908..1417953 100644
1972--- a/drivers/net/Kconfig
1973+++ b/drivers/net/Kconfig
1974@@ -500,6 +500,8 @@ source "drivers/net/wan/Kconfig"
1975
1976 source "drivers/net/ieee802154/Kconfig"
1977
1978+source "drivers/net/wwan/Kconfig"
1979+
1980 config XEN_NETDEV_FRONTEND
1981 tristate "Xen network device frontend driver"
1982 depends on XEN
1983diff --git a/drivers/net/Makefile b/drivers/net/Makefile
1984index 36e2e41..5da6424 100644
1985--- a/drivers/net/Makefile
1986+++ b/drivers/net/Makefile
1987@@ -36,7 +36,7 @@ obj-$(CONFIG_GTP) += gtp.o
1988 obj-$(CONFIG_NLMON) += nlmon.o
1989 obj-$(CONFIG_NET_VRF) += vrf.o
1990 obj-$(CONFIG_VSOCKMON) += vsockmon.o
1991-obj-$(CONFIG_MHI_NET) += mhi_net.o
1992+obj-$(CONFIG_MHI_NET) += mhi/
1993
1994 #
1995 # Networking Drivers
1996@@ -68,6 +68,7 @@ obj-$(CONFIG_SUNGEM_PHY) += sungem_phy.o
1997 obj-$(CONFIG_WAN) += wan/
1998 obj-$(CONFIG_WLAN) += wireless/
1999 obj-$(CONFIG_IEEE802154) += ieee802154/
2000+obj-$(CONFIG_WWAN) += wwan/
2001
2002 obj-$(CONFIG_VMXNET3) += vmxnet3/
2003 obj-$(CONFIG_XEN_NETDEV_FRONTEND) += xen-netfront.o
2004diff --git a/drivers/net/mhi/Makefile b/drivers/net/mhi/Makefile
2005new file mode 100644
2006index 0000000..f71b9f8
2007--- /dev/null
2008+++ b/drivers/net/mhi/Makefile
2009@@ -0,0 +1,3 @@
2010+obj-$(CONFIG_MHI_NET) += mhi_net.o
2011+
2012+mhi_net-y := net.o proto_mbim.o
2013diff --git a/drivers/net/mhi/mhi.h b/drivers/net/mhi/mhi.h
2014new file mode 100644
2015index 0000000..1d0c499
2016--- /dev/null
2017+++ b/drivers/net/mhi/mhi.h
2018@@ -0,0 +1,41 @@
2019+/* SPDX-License-Identifier: GPL-2.0-or-later */
2020+/* MHI Network driver - Network over MHI bus
2021+ *
2022+ * Copyright (C) 2021 Linaro Ltd <loic.poulain@linaro.org>
2023+ */
2024+
2025+struct mhi_net_stats {
2026+ u64_stats_t rx_packets;
2027+ u64_stats_t rx_bytes;
2028+ u64_stats_t rx_errors;
2029+ u64_stats_t rx_dropped;
2030+ u64_stats_t rx_length_errors;
2031+ u64_stats_t tx_packets;
2032+ u64_stats_t tx_bytes;
2033+ u64_stats_t tx_errors;
2034+ u64_stats_t tx_dropped;
2035+ struct u64_stats_sync tx_syncp;
2036+ struct u64_stats_sync rx_syncp;
2037+};
2038+
2039+struct mhi_net_dev {
2040+ struct mhi_device *mdev;
2041+ struct net_device *ndev;
2042+ struct sk_buff *skbagg_head;
2043+ struct sk_buff *skbagg_tail;
2044+ const struct mhi_net_proto *proto;
2045+ void *proto_data;
2046+ struct delayed_work rx_refill;
2047+ struct mhi_net_stats stats;
2048+ u32 rx_queue_sz;
2049+ int msg_enable;
2050+ unsigned int mru;
2051+};
2052+
2053+struct mhi_net_proto {
2054+ int (*init)(struct mhi_net_dev *mhi_netdev);
2055+ struct sk_buff * (*tx_fixup)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
2056+ void (*rx)(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb);
2057+};
2058+
2059+extern const struct mhi_net_proto proto_mbim;
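
These mhi_net_proto hooks are the extension point used by proto_mbim.c below: init() runs once after netdev registration, tx_fixup() may prepend protocol framing before the skb is queued to MHI, and rx() de-aggregates inbound buffers. A minimal pass-through protocol, hypothetical and shown only to illustrate the contract, would look like:

    /* Illustrative only, not part of this series */
    static int passthrough_init(struct mhi_net_dev *mhi_netdev)
    {
            return 0;                       /* nothing to allocate */
    }

    static struct sk_buff *passthrough_tx_fixup(struct mhi_net_dev *mhi_netdev,
                                                struct sk_buff *skb)
    {
            return skb;                     /* no extra framing */
    }

    static void passthrough_rx(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb)
    {
            netif_rx(skb);                  /* deliver frames unchanged */
    }

    static const struct mhi_net_proto proto_passthrough = {
            .init = passthrough_init,
            .tx_fixup = passthrough_tx_fixup,
            .rx = passthrough_rx,
    };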
2060diff --git a/drivers/net/mhi_net.c b/drivers/net/mhi/net.c
2061similarity index 57%
2062rename from drivers/net/mhi_net.c
2063rename to drivers/net/mhi/net.c
2064index fa41d8c..5ec7a29 100644
2065--- a/drivers/net/mhi_net.c
2066+++ b/drivers/net/mhi/net.c
2067@@ -12,30 +12,15 @@
2068 #include <linux/skbuff.h>
2069 #include <linux/u64_stats_sync.h>
2070
2071+#include "mhi.h"
2072+
2073 #define MHI_NET_MIN_MTU ETH_MIN_MTU
2074 #define MHI_NET_MAX_MTU 0xffff
2075 #define MHI_NET_DEFAULT_MTU 0x4000
2076
2077-struct mhi_net_stats {
2078- u64_stats_t rx_packets;
2079- u64_stats_t rx_bytes;
2080- u64_stats_t rx_errors;
2081- u64_stats_t rx_dropped;
2082- u64_stats_t tx_packets;
2083- u64_stats_t tx_bytes;
2084- u64_stats_t tx_errors;
2085- u64_stats_t tx_dropped;
2086- atomic_t rx_queued;
2087- struct u64_stats_sync tx_syncp;
2088- struct u64_stats_sync rx_syncp;
2089-};
2090-
2091-struct mhi_net_dev {
2092- struct mhi_device *mdev;
2093- struct net_device *ndev;
2094- struct delayed_work rx_refill;
2095- struct mhi_net_stats stats;
2096- u32 rx_queue_sz;
2097+struct mhi_device_info {
2098+ const char *netname;
2099+ const struct mhi_net_proto *proto;
2100 };
2101
2102 static int mhi_ndo_open(struct net_device *ndev)
2103@@ -67,26 +52,35 @@ static int mhi_ndo_stop(struct net_device *ndev)
2104 static int mhi_ndo_xmit(struct sk_buff *skb, struct net_device *ndev)
2105 {
2106 struct mhi_net_dev *mhi_netdev = netdev_priv(ndev);
2107+ const struct mhi_net_proto *proto = mhi_netdev->proto;
2108 struct mhi_device *mdev = mhi_netdev->mdev;
2109 int err;
2110
2111+ if (proto && proto->tx_fixup) {
2112+ skb = proto->tx_fixup(mhi_netdev, skb);
2113+ if (unlikely(!skb))
2114+ goto exit_drop;
2115+ }
2116+
2117 err = mhi_queue_skb(mdev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT);
2118 if (unlikely(err)) {
2119 net_err_ratelimited("%s: Failed to queue TX buf (%d)\n",
2120 ndev->name, err);
2121-
2122- u64_stats_update_begin(&mhi_netdev->stats.tx_syncp);
2123- u64_stats_inc(&mhi_netdev->stats.tx_dropped);
2124- u64_stats_update_end(&mhi_netdev->stats.tx_syncp);
2125-
2126- /* drop the packet */
2127 dev_kfree_skb_any(skb);
2128+ goto exit_drop;
2129 }
2130
2131 if (mhi_queue_is_full(mdev, DMA_TO_DEVICE))
2132 netif_stop_queue(ndev);
2133
2134 return NETDEV_TX_OK;
2135+
2136+exit_drop:
2137+ u64_stats_update_begin(&mhi_netdev->stats.tx_syncp);
2138+ u64_stats_inc(&mhi_netdev->stats.tx_dropped);
2139+ u64_stats_update_end(&mhi_netdev->stats.tx_syncp);
2140+
2141+ return NETDEV_TX_OK;
2142 }
2143
2144 static void mhi_ndo_get_stats64(struct net_device *ndev,
2145@@ -101,6 +95,7 @@ static void mhi_ndo_get_stats64(struct net_device *ndev,
2146 stats->rx_bytes = u64_stats_read(&mhi_netdev->stats.rx_bytes);
2147 stats->rx_errors = u64_stats_read(&mhi_netdev->stats.rx_errors);
2148 stats->rx_dropped = u64_stats_read(&mhi_netdev->stats.rx_dropped);
2149+ stats->rx_length_errors = u64_stats_read(&mhi_netdev->stats.rx_length_errors);
2150 } while (u64_stats_fetch_retry_irq(&mhi_netdev->stats.rx_syncp, start));
2151
2152 do {
2153@@ -122,7 +117,7 @@ static const struct net_device_ops mhi_netdev_ops = {
2154 static void mhi_net_setup(struct net_device *ndev)
2155 {
2156 ndev->header_ops = NULL; /* No header */
2157- ndev->type = ARPHRD_NONE; /* QMAP... */
2158+ ndev->type = ARPHRD_RAWIP;
2159 ndev->hard_header_len = 0;
2160 ndev->addr_len = 0;
2161 ndev->flags = IFF_POINTOPOINT | IFF_NOARP;
2162@@ -133,38 +128,101 @@ static void mhi_net_setup(struct net_device *ndev)
2163 ndev->tx_queue_len = 1000;
2164 }
2165
2166+static struct sk_buff *mhi_net_skb_agg(struct mhi_net_dev *mhi_netdev,
2167+ struct sk_buff *skb)
2168+{
2169+ struct sk_buff *head = mhi_netdev->skbagg_head;
2170+ struct sk_buff *tail = mhi_netdev->skbagg_tail;
2171+
2172+ /* This is non-paged skb chaining using frag_list */
2173+ if (!head) {
2174+ mhi_netdev->skbagg_head = skb;
2175+ return skb;
2176+ }
2177+
2178+ if (!skb_shinfo(head)->frag_list)
2179+ skb_shinfo(head)->frag_list = skb;
2180+ else
2181+ tail->next = skb;
2182+
2183+ head->len += skb->len;
2184+ head->data_len += skb->len;
2185+ head->truesize += skb->truesize;
2186+
2187+ mhi_netdev->skbagg_tail = skb;
2188+
2189+ return mhi_netdev->skbagg_head;
2190+}
2191+
2192 static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
2193 struct mhi_result *mhi_res)
2194 {
2195 struct mhi_net_dev *mhi_netdev = dev_get_drvdata(&mhi_dev->dev);
2196+ const struct mhi_net_proto *proto = mhi_netdev->proto;
2197 struct sk_buff *skb = mhi_res->buf_addr;
2198- int remaining;
2199+ int free_desc_count;
2200
2201- remaining = atomic_dec_return(&mhi_netdev->stats.rx_queued);
2202+ free_desc_count = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
2203
2204 if (unlikely(mhi_res->transaction_status)) {
2205- dev_kfree_skb_any(skb);
2206-
2207- /* MHI layer stopping/resetting the DL channel */
2208- if (mhi_res->transaction_status == -ENOTCONN)
2209+ switch (mhi_res->transaction_status) {
2210+ case -EOVERFLOW:
2211+ /* The packet cannot fit in one MHI buffer and has been
2212+ * split over multiple MHI transfers, so re-aggregate it.
2213+ * That usually means the device-side MTU is larger than
2214+ * the host-side MTU/MRU. Since this is not optimal,
2215+ * print a warning (once).
2216+ */
2217+ netdev_warn_once(mhi_netdev->ndev,
2218+ "Fragmented packets received, fix MTU?\n");
2219+ skb_put(skb, mhi_res->bytes_xferd);
2220+ mhi_net_skb_agg(mhi_netdev, skb);
2221+ break;
2222+ case -ENOTCONN:
2223+ /* MHI layer stopping/resetting the DL channel */
2224+ dev_kfree_skb_any(skb);
2225 return;
2226-
2227- u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
2228- u64_stats_inc(&mhi_netdev->stats.rx_errors);
2229- u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
2230+ default:
2231+ /* Unknown error, simply drop */
2232+ dev_kfree_skb_any(skb);
2233+ u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
2234+ u64_stats_inc(&mhi_netdev->stats.rx_errors);
2235+ u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
2236+ }
2237 } else {
2238+ skb_put(skb, mhi_res->bytes_xferd);
2239+
2240+ if (mhi_netdev->skbagg_head) {
2241+ /* Aggregate the final fragment */
2242+ skb = mhi_net_skb_agg(mhi_netdev, skb);
2243+ mhi_netdev->skbagg_head = NULL;
2244+ }
2245+
2246 u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
2247 u64_stats_inc(&mhi_netdev->stats.rx_packets);
2248- u64_stats_add(&mhi_netdev->stats.rx_bytes, mhi_res->bytes_xferd);
2249+ u64_stats_add(&mhi_netdev->stats.rx_bytes, skb->len);
2250 u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
2251
2252- skb->protocol = htons(ETH_P_MAP);
2253- skb_put(skb, mhi_res->bytes_xferd);
2254- netif_rx(skb);
2255+ switch (skb->data[0] & 0xf0) {
2256+ case 0x40:
2257+ skb->protocol = htons(ETH_P_IP);
2258+ break;
2259+ case 0x60:
2260+ skb->protocol = htons(ETH_P_IPV6);
2261+ break;
2262+ default:
2263+ skb->protocol = htons(ETH_P_MAP);
2264+ break;
2265+ }
2266+
2267+ if (proto && proto->rx)
2268+ proto->rx(mhi_netdev, skb);
2269+ else
2270+ netif_rx(skb);
2271 }
2272
2273 /* Refill if RX buffers queue becomes low */
2274- if (remaining <= mhi_netdev->rx_queue_sz / 2)
2275+ if (free_desc_count >= mhi_netdev->rx_queue_sz / 2)
2276 schedule_delayed_work(&mhi_netdev->rx_refill, 0);
2277 }
2278
2279@@ -207,11 +265,13 @@ static void mhi_net_rx_refill_work(struct work_struct *work)
2280 rx_refill.work);
2281 struct net_device *ndev = mhi_netdev->ndev;
2282 struct mhi_device *mdev = mhi_netdev->mdev;
2283- int size = READ_ONCE(ndev->mtu);
2284 struct sk_buff *skb;
2285+ unsigned int size;
2286 int err;
2287
2288- while (atomic_read(&mhi_netdev->stats.rx_queued) < mhi_netdev->rx_queue_sz) {
2289+ size = mhi_netdev->mru ? mhi_netdev->mru : READ_ONCE(ndev->mtu);
2290+
2291+ while (!mhi_queue_is_full(mdev, DMA_FROM_DEVICE)) {
2292 skb = netdev_alloc_skb(ndev, size);
2293 if (unlikely(!skb))
2294 break;
2295@@ -224,8 +284,6 @@ static void mhi_net_rx_refill_work(struct work_struct *work)
2296 break;
2297 }
2298
2299- atomic_inc(&mhi_netdev->stats.rx_queued);
2300-
2301 /* Do not hog the CPU if rx buffers are consumed faster than
2302 * queued (unlikely).
2303 */
2304@@ -233,21 +291,25 @@ static void mhi_net_rx_refill_work(struct work_struct *work)
2305 }
2306
2307 /* If we're still starved of rx buffers, reschedule later */
2308- if (unlikely(!atomic_read(&mhi_netdev->stats.rx_queued)))
2309+ if (mhi_get_free_desc_count(mdev, DMA_FROM_DEVICE) == mhi_netdev->rx_queue_sz)
2310 schedule_delayed_work(&mhi_netdev->rx_refill, HZ / 2);
2311 }
2312
2313+static struct device_type wwan_type = {
2314+ .name = "wwan",
2315+};
2316+
2317 static int mhi_net_probe(struct mhi_device *mhi_dev,
2318 const struct mhi_device_id *id)
2319 {
2320- const char *netname = (char *)id->driver_data;
2321+ const struct mhi_device_info *info = (struct mhi_device_info *)id->driver_data;
2322 struct device *dev = &mhi_dev->dev;
2323 struct mhi_net_dev *mhi_netdev;
2324 struct net_device *ndev;
2325 int err;
2326
2327- ndev = alloc_netdev(sizeof(*mhi_netdev), netname, NET_NAME_PREDICTABLE,
2328- mhi_net_setup);
2329+ ndev = alloc_netdev(sizeof(*mhi_netdev), info->netname,
2330+ NET_NAME_PREDICTABLE, mhi_net_setup);
2331 if (!ndev)
2332 return -ENOMEM;
2333
2334@@ -255,10 +317,10 @@ static int mhi_net_probe(struct mhi_device *mhi_dev,
2335 dev_set_drvdata(dev, mhi_netdev);
2336 mhi_netdev->ndev = ndev;
2337 mhi_netdev->mdev = mhi_dev;
2338+ mhi_netdev->skbagg_head = NULL;
2339+ mhi_netdev->proto = info->proto;
2340 SET_NETDEV_DEV(ndev, &mhi_dev->dev);
2341-
2342- /* All MHI net channels have 128 ring elements (at least for now) */
2343- mhi_netdev->rx_queue_sz = 128;
2344+ SET_NETDEV_DEVTYPE(ndev, &wwan_type);
2345
2346 INIT_DELAYED_WORK(&mhi_netdev->rx_refill, mhi_net_rx_refill_work);
2347 u64_stats_init(&mhi_netdev->stats.rx_syncp);
2348@@ -269,12 +331,23 @@ static int mhi_net_probe(struct mhi_device *mhi_dev,
2349 if (err)
2350 goto out_err;
2351
2352+ /* Number of transfer descriptors determines size of the queue */
2353+ mhi_netdev->rx_queue_sz = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
2354+
2355 err = register_netdev(ndev);
2356 if (err)
2357 goto out_err;
2358
2359+ if (mhi_netdev->proto) {
2360+ err = mhi_netdev->proto->init(mhi_netdev);
2361+ if (err)
2362+ goto out_err_proto;
2363+ }
2364+
2365 return 0;
2366
2367+out_err_proto:
2368+ unregister_netdev(ndev);
2369 out_err:
2370 free_netdev(ndev);
2371 return err;
2372@@ -288,12 +361,32 @@ static void mhi_net_remove(struct mhi_device *mhi_dev)
2373
2374 mhi_unprepare_from_transfer(mhi_netdev->mdev);
2375
2376+ if (mhi_netdev->skbagg_head)
2377+ kfree_skb(mhi_netdev->skbagg_head);
2378+
2379 free_netdev(mhi_netdev->ndev);
2380 }
2381
2382+static const struct mhi_device_info mhi_hwip0 = {
2383+ .netname = "mhi_hwip%d",
2384+};
2385+
2386+static const struct mhi_device_info mhi_swip0 = {
2387+ .netname = "mhi_swip%d",
2388+};
2389+
2390+static const struct mhi_device_info mhi_hwip0_mbim = {
2391+ .netname = "mhi_mbim%d",
2392+ .proto = &proto_mbim,
2393+};
2394+
2395 static const struct mhi_device_id mhi_net_id_table[] = {
2396- { .chan = "IP_HW0", .driver_data = (kernel_ulong_t)"mhi_hwip%d" },
2397- { .chan = "IP_SW0", .driver_data = (kernel_ulong_t)"mhi_swip%d" },
2398+ /* Hardware accelerated data PATH (to modem IPA), protocol agnostic */
2399+ { .chan = "IP_HW0", .driver_data = (kernel_ulong_t)&mhi_hwip0 },
2400+ /* Software data PATH (to modem CPU) */
2401+ { .chan = "IP_SW0", .driver_data = (kernel_ulong_t)&mhi_swip0 },
2402+ /* Hardware accelerated data PATH (to modem IPA), MBIM protocol */
2403+ { .chan = "IP_HW0_MBIM", .driver_data = (kernel_ulong_t)&mhi_hwip0_mbim },
2404 {}
2405 };
2406 MODULE_DEVICE_TABLE(mhi, mhi_net_id_table);
2407diff --git a/drivers/net/mhi/proto_mbim.c b/drivers/net/mhi/proto_mbim.c
2408new file mode 100644
2409index 0000000..fc72b3f
2410--- /dev/null
2411+++ b/drivers/net/mhi/proto_mbim.c
2412@@ -0,0 +1,303 @@
2413+// SPDX-License-Identifier: GPL-2.0-or-later
2414+/* MHI Network driver - Network over MHI bus
2415+ *
2416+ * Copyright (C) 2021 Linaro Ltd <loic.poulain@linaro.org>
2417+ *
2418+ * This driver copies some code from cdc_ncm, which is:
2419+ * Copyright (C) ST-Ericsson 2010-2012
2420+ * and cdc_mbim, which is:
2421+ * Copyright (c) 2012 Smith Micro Software, Inc.
2422+ * Copyright (c) 2012 Bjørn Mork <bjorn@mork.no>
2423+ *
2424+ */
2425+
2426+#include <linux/ethtool.h>
2427+#include <linux/if_vlan.h>
2428+#include <linux/ip.h>
2429+#include <linux/mii.h>
2430+#include <linux/netdevice.h>
2431+#include <linux/skbuff.h>
2432+#include <linux/usb.h>
2433+#include <linux/usb/cdc.h>
2434+#include <linux/usb/usbnet.h>
2435+#include <linux/usb/cdc_ncm.h>
2436+
2437+#include "mhi.h"
2438+
2439+#define MBIM_NDP16_SIGN_MASK 0x00ffffff
2440+
2441+/* Usual WWAN MTU */
2442+#define MHI_MBIM_DEFAULT_MTU 1500
2443+
2444+/* An MRU of 3500 optimizes skb allocation: the skbs will basically fit in
2445+ * one 4K page. Large MBIM packets will simply be split over several MHI
2446+ * transfers and chained by the MHI net layer (zerocopy).
2447+ */
2448+#define MHI_MBIM_DEFAULT_MRU 3500
2449+
2450+struct mbim_context {
2451+ u16 rx_seq;
2452+ u16 tx_seq;
2453+};
2454+
2455+static void __mbim_length_errors_inc(struct mhi_net_dev *dev)
2456+{
2457+ u64_stats_update_begin(&dev->stats.rx_syncp);
2458+ u64_stats_inc(&dev->stats.rx_length_errors);
2459+ u64_stats_update_end(&dev->stats.rx_syncp);
2460+}
2461+
2462+static void __mbim_errors_inc(struct mhi_net_dev *dev)
2463+{
2464+ u64_stats_update_begin(&dev->stats.rx_syncp);
2465+ u64_stats_inc(&dev->stats.rx_errors);
2466+ u64_stats_update_end(&dev->stats.rx_syncp);
2467+}
2468+
2469+static int mbim_rx_verify_nth16(struct sk_buff *skb)
2470+{
2471+ struct mhi_net_dev *dev = netdev_priv(skb->dev);
2472+ struct mbim_context *ctx = dev->proto_data;
2473+ struct usb_cdc_ncm_nth16 *nth16;
2474+ int len;
2475+
2476+ if (skb->len < sizeof(struct usb_cdc_ncm_nth16) +
2477+ sizeof(struct usb_cdc_ncm_ndp16)) {
2478+ netif_dbg(dev, rx_err, dev->ndev, "frame too short\n");
2479+ __mbim_length_errors_inc(dev);
2480+ return -EINVAL;
2481+ }
2482+
2483+ nth16 = (struct usb_cdc_ncm_nth16 *)skb->data;
2484+
2485+ if (nth16->dwSignature != cpu_to_le32(USB_CDC_NCM_NTH16_SIGN)) {
2486+ netif_dbg(dev, rx_err, dev->ndev,
2487+ "invalid NTH16 signature <%#010x>\n",
2488+ le32_to_cpu(nth16->dwSignature));
2489+ __mbim_errors_inc(dev);
2490+ return -EINVAL;
2491+ }
2492+
2493+ /* No limit on the block length, except the size of the data pkt */
2494+ len = le16_to_cpu(nth16->wBlockLength);
2495+ if (len > skb->len) {
2496+ netif_dbg(dev, rx_err, dev->ndev,
2497+ "NTB does not fit into the skb %u/%u\n", len,
2498+ skb->len);
2499+ __mbim_length_errors_inc(dev);
2500+ return -EINVAL;
2501+ }
2502+
2503+ if (ctx->rx_seq + 1 != le16_to_cpu(nth16->wSequence) &&
2504+ (ctx->rx_seq || le16_to_cpu(nth16->wSequence)) &&
2505+ !(ctx->rx_seq == 0xffff && !le16_to_cpu(nth16->wSequence))) {
2506+ netif_dbg(dev, rx_err, dev->ndev,
2507+ "sequence number glitch prev=%d curr=%d\n",
2508+ ctx->rx_seq, le16_to_cpu(nth16->wSequence));
2509+ }
2510+ ctx->rx_seq = le16_to_cpu(nth16->wSequence);
2511+
2512+ return le16_to_cpu(nth16->wNdpIndex);
2513+}
2514+
2515+static int mbim_rx_verify_ndp16(struct sk_buff *skb, struct usb_cdc_ncm_ndp16 *ndp16)
2516+{
2517+ struct mhi_net_dev *dev = netdev_priv(skb->dev);
2518+ int ret;
2519+
2520+ if (le16_to_cpu(ndp16->wLength) < USB_CDC_NCM_NDP16_LENGTH_MIN) {
2521+ netif_dbg(dev, rx_err, dev->ndev, "invalid DPT16 length <%u>\n",
2522+ le16_to_cpu(ndp16->wLength));
2523+ return -EINVAL;
2524+ }
2525+
2526+ ret = ((le16_to_cpu(ndp16->wLength) - sizeof(struct usb_cdc_ncm_ndp16))
2527+ / sizeof(struct usb_cdc_ncm_dpe16));
2528+ ret--; /* Last entry is always a NULL terminator */
2529+
2530+ if (sizeof(struct usb_cdc_ncm_ndp16) +
2531+ ret * sizeof(struct usb_cdc_ncm_dpe16) > skb->len) {
2532+ netif_dbg(dev, rx_err, dev->ndev,
2533+ "Invalid nframes = %d\n", ret);
2534+ return -EINVAL;
2535+ }
2536+
2537+ return ret;
2538+}
2539+
2540+static void mbim_rx(struct mhi_net_dev *mhi_netdev, struct sk_buff *skb)
2541+{
2542+ struct net_device *ndev = mhi_netdev->ndev;
2543+ int ndpoffset;
2544+
2545+ /* Check NTB header and retrieve first NDP offset */
2546+ ndpoffset = mbim_rx_verify_nth16(skb);
2547+ if (ndpoffset < 0) {
2548+ net_err_ratelimited("%s: Incorrect NTB header\n", ndev->name);
2549+ goto error;
2550+ }
2551+
2552+ /* Process each NDP */
2553+ while (1) {
2554+ struct usb_cdc_ncm_ndp16 ndp16;
2555+ struct usb_cdc_ncm_dpe16 dpe16;
2556+ int nframes, n, dpeoffset;
2557+
2558+ if (skb_copy_bits(skb, ndpoffset, &ndp16, sizeof(ndp16))) {
2559+ net_err_ratelimited("%s: Incorrect NDP offset (%u)\n",
2560+ ndev->name, ndpoffset);
2561+ __mbim_length_errors_inc(mhi_netdev);
2562+ goto error;
2563+ }
2564+
2565+ /* Check NDP header and retrieve number of datagrams */
2566+ nframes = mbim_rx_verify_ndp16(skb, &ndp16);
2567+ if (nframes < 0) {
2568+ net_err_ratelimited("%s: Incorrect NDP16\n", ndev->name);
2569+ __mbim_length_errors_inc(mhi_netdev);
2570+ goto error;
2571+ }
2572+
2573+ /* Only IP data type supported, no DSS in MHI context */
2574+ if ((ndp16.dwSignature & cpu_to_le32(MBIM_NDP16_SIGN_MASK))
2575+ != cpu_to_le32(USB_CDC_MBIM_NDP16_IPS_SIGN)) {
2576+ net_err_ratelimited("%s: Unsupported NDP type\n", ndev->name);
2577+ __mbim_errors_inc(mhi_netdev);
2578+ goto next_ndp;
2579+ }
2580+
2581+ /* Only primary IP session 0 (0x00) supported for now */
2582+ if (ndp16.dwSignature & ~cpu_to_le32(MBIM_NDP16_SIGN_MASK)) {
2583+ net_err_ratelimited("%s: bad packet session\n", ndev->name);
2584+ __mbim_errors_inc(mhi_netdev);
2585+ goto next_ndp;
2586+ }
2587+
2588+ /* de-aggregate and deliver IP packets */
2589+ dpeoffset = ndpoffset + sizeof(struct usb_cdc_ncm_ndp16);
2590+ for (n = 0; n < nframes; n++, dpeoffset += sizeof(dpe16)) {
2591+ u16 dgram_offset, dgram_len;
2592+ struct sk_buff *skbn;
2593+
2594+ if (skb_copy_bits(skb, dpeoffset, &dpe16, sizeof(dpe16)))
2595+ break;
2596+
2597+ dgram_offset = le16_to_cpu(dpe16.wDatagramIndex);
2598+ dgram_len = le16_to_cpu(dpe16.wDatagramLength);
2599+
2600+ if (!dgram_offset || !dgram_len)
2601+ break; /* null terminator */
2602+
2603+ skbn = netdev_alloc_skb(ndev, dgram_len);
2604+ if (!skbn)
2605+ continue;
2606+
2607+ skb_put(skbn, dgram_len);
2608+ skb_copy_bits(skb, dgram_offset, skbn->data, dgram_len);
2609+
2610+ switch (skbn->data[0] & 0xf0) {
2611+ case 0x40:
2612+ skbn->protocol = htons(ETH_P_IP);
2613+ break;
2614+ case 0x60:
2615+ skbn->protocol = htons(ETH_P_IPV6);
2616+ break;
2617+ default:
2618+ net_err_ratelimited("%s: unknown protocol\n",
2619+ ndev->name);
2620+ __mbim_errors_inc(mhi_netdev);
2621+ dev_kfree_skb_any(skbn);
2622+ continue;
2623+ }
2624+
2625+ netif_rx(skbn);
2626+ }
2627+next_ndp:
2628+ /* Other NDP to process? */
2629+ ndpoffset = (int)le16_to_cpu(ndp16.wNextNdpIndex);
2630+ if (!ndpoffset)
2631+ break;
2632+ }
2633+
2634+ /* free skb */
2635+ dev_consume_skb_any(skb);
2636+ return;
2637+error:
2638+ dev_kfree_skb_any(skb);
2639+}
2640+
2641+struct mbim_tx_hdr {
2642+ struct usb_cdc_ncm_nth16 nth16;
2643+ struct usb_cdc_ncm_ndp16 ndp16;
2644+ struct usb_cdc_ncm_dpe16 dpe16[2];
2645+} __packed;
2646+
2647+static struct sk_buff *mbim_tx_fixup(struct mhi_net_dev *mhi_netdev,
2648+ struct sk_buff *skb)
2649+{
2650+ struct mbim_context *ctx = mhi_netdev->proto_data;
2651+ unsigned int dgram_size = skb->len;
2652+ struct usb_cdc_ncm_nth16 *nth16;
2653+ struct usb_cdc_ncm_ndp16 *ndp16;
2654+ struct mbim_tx_hdr *mbim_hdr;
2655+
2656+ /* For now, this is a partial implementation of CDC MBIM: only one NDP
2657+ * is sent, containing the IP packet (no aggregation).
2658+ */
2659+
2660+ /* Ensure we have enough headroom for crafting MBIM header */
2661+ if (skb_cow_head(skb, sizeof(struct mbim_tx_hdr))) {
2662+ dev_kfree_skb_any(skb);
2663+ return NULL;
2664+ }
2665+
2666+ mbim_hdr = skb_push(skb, sizeof(struct mbim_tx_hdr));
2667+
2668+ /* Fill NTB header */
2669+ nth16 = &mbim_hdr->nth16;
2670+ nth16->dwSignature = cpu_to_le32(USB_CDC_NCM_NTH16_SIGN);
2671+ nth16->wHeaderLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16));
2672+ nth16->wSequence = cpu_to_le16(ctx->tx_seq++);
2673+ nth16->wBlockLength = cpu_to_le16(skb->len);
2674+ nth16->wNdpIndex = cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16));
2675+
2676+ /* Fill the unique NDP */
2677+ ndp16 = &mbim_hdr->ndp16;
2678+ ndp16->dwSignature = cpu_to_le32(USB_CDC_MBIM_NDP16_IPS_SIGN);
2679+ ndp16->wLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_ndp16)
2680+ + sizeof(struct usb_cdc_ncm_dpe16) * 2);
2681+ ndp16->wNextNdpIndex = 0;
2682+
2683+ /* Datagram follows the mbim header */
2684+ ndp16->dpe16[0].wDatagramIndex = cpu_to_le16(sizeof(struct mbim_tx_hdr));
2685+ ndp16->dpe16[0].wDatagramLength = cpu_to_le16(dgram_size);
2686+
2687+ /* null termination */
2688+ ndp16->dpe16[1].wDatagramIndex = 0;
2689+ ndp16->dpe16[1].wDatagramLength = 0;
2690+
2691+ return skb;
2692+}
2693+
2694+static int mbim_init(struct mhi_net_dev *mhi_netdev)
2695+{
2696+ struct net_device *ndev = mhi_netdev->ndev;
2697+
2698+ mhi_netdev->proto_data = devm_kzalloc(&ndev->dev,
2699+ sizeof(struct mbim_context),
2700+ GFP_KERNEL);
2701+ if (!mhi_netdev->proto_data)
2702+ return -ENOMEM;
2703+
2704+ ndev->needed_headroom = sizeof(struct mbim_tx_hdr);
2705+ ndev->mtu = MHI_MBIM_DEFAULT_MTU;
2706+ mhi_netdev->mru = MHI_MBIM_DEFAULT_MRU;
2707+
2708+ return 0;
2709+}
2710+
2711+const struct mhi_net_proto proto_mbim = {
2712+ .init = mbim_init,
2713+ .rx = mbim_rx,
2714+ .tx_fixup = mbim_tx_fixup,
2715+};
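
To ease review of the single-NDP framing in mbim_tx_fixup(), the resulting NTB layout works out as follows from the usb_cdc_ncm_* structure sizes (12-byte NTH16, 8-byte NDP16, 4-byte DPE16), assuming one 1500-byte IP datagram:

    offset  0: nth16    wHeaderLength = 12, wNdpIndex = 12, wBlockLength = 28 + 1500 = 1528
    offset 12: ndp16    wLength = 8 + 2 * 4 = 16, wNextNdpIndex = 0
    offset 20: dpe16[0] wDatagramIndex = 28, wDatagramLength = 1500
    offset 24: dpe16[1] zeroed (null terminator)
    offset 28: IP datagram (1500 bytes)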
2716diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
2717new file mode 100644
2718index 0000000..7ad1920
2719--- /dev/null
2720+++ b/drivers/net/wwan/Kconfig
2721@@ -0,0 +1,37 @@
2722+# SPDX-License-Identifier: GPL-2.0-only
2723+#
2724+# Wireless WAN device configuration
2725+#
2726+
2727+menuconfig WWAN
2728+ bool "Wireless WAN"
2729+ help
2730+ This section contains Wireless WAN configuration for WWAN framework
2731+ and drivers.
2732+
2733+if WWAN
2734+
2735+config WWAN_CORE
2736+ tristate "WWAN Driver Core"
2737+ help
2738+ Say Y here if you want to use the WWAN driver core. This driver
2739+ provides a common framework for WWAN drivers.
2740+
2741+ To compile this driver as a module, choose M here: the module will be
2742+ called wwan.
2743+
2744+config MHI_WWAN_CTRL
2745+ tristate "MHI WWAN control driver for QCOM-based PCIe modems"
2746+ select WWAN_CORE
2747+ depends on MHI_BUS
2748+ help
2749+ MHI WWAN CTRL allows QCOM-based PCIe modems to expose different modem
2750+ control protocols/ports to userspace, including AT, MBIM, QMI, DIAG
2751+ and FIREHOSE. These protocols can be accessed directly from userspace
2752+ (e.g. AT commands) or via libraries/tools (e.g. libmbim, libqmi,
2753+ libqcdm...).
2754+
2755+ To compile this driver as a module, choose M here: the module will be
2756+ called mhi_wwan_ctrl.
2757+
2758+endif # WWAN
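
A minimal config fragment for building these options would be as follows; the =y/=m split here is only an illustration, the values actually enabled for linux-intel are carried in the debian.intel config change in this series:

    CONFIG_WWAN=y
    CONFIG_WWAN_CORE=m
    CONFIG_MHI_WWAN_CTRL=m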
2759diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile
2760new file mode 100644
2761index 0000000..556cd90
2762--- /dev/null
2763+++ b/drivers/net/wwan/Makefile
2764@@ -0,0 +1,9 @@
2765+# SPDX-License-Identifier: GPL-2.0
2766+#
2767+# Makefile for the Linux WWAN device drivers.
2768+#
2769+
2770+obj-$(CONFIG_WWAN_CORE) += wwan.o
2771+wwan-objs += wwan_core.o
2772+
2773+obj-$(CONFIG_MHI_WWAN_CTRL) += mhi_wwan_ctrl.o
2774diff --git a/drivers/net/wwan/mhi_wwan_ctrl.c b/drivers/net/wwan/mhi_wwan_ctrl.c
2775new file mode 100644
2776index 0000000..1bc6b69
2777--- /dev/null
2778+++ b/drivers/net/wwan/mhi_wwan_ctrl.c
2779@@ -0,0 +1,284 @@
2780+// SPDX-License-Identifier: GPL-2.0-only
2781+/* Copyright (c) 2021, Linaro Ltd <loic.poulain@linaro.org> */
2782+#include <linux/kernel.h>
2783+#include <linux/mhi.h>
2784+#include <linux/mod_devicetable.h>
2785+#include <linux/module.h>
2786+#include <linux/wwan.h>
2787+
2788+/* MHI wwan flags */
2789+enum mhi_wwan_flags {
2790+ MHI_WWAN_DL_CAP,
2791+ MHI_WWAN_UL_CAP,
2792+ MHI_WWAN_RX_REFILL,
2793+};
2794+
2795+#define MHI_WWAN_MAX_MTU 0x8000
2796+
2797+struct mhi_wwan_dev {
2798+ /* Lower level is an MHI dev, upper level is a WWAN port */
2799+ struct mhi_device *mhi_dev;
2800+ struct wwan_port *wwan_port;
2801+
2802+ /* State and capabilities */
2803+ unsigned long flags;
2804+ size_t mtu;
2805+
2806+ /* Protect against concurrent TX and TX-completion (bh) */
2807+ spinlock_t tx_lock;
2808+
2809+ /* Protect RX budget and rx_refill scheduling */
2810+ spinlock_t rx_lock;
2811+ struct work_struct rx_refill;
2812+
2813+ /* RX budget is initially set to the size of the MHI RX queue and is
2814+ * used to limit the number of allocated and queued packets. It is
2815+ * decremented on data queueing and incremented on data release.
2816+ */
2817+ unsigned int rx_budget;
2818+};
2819+
2820+/* Increment RX budget and schedule RX refill if necessary */
2821+static void mhi_wwan_rx_budget_inc(struct mhi_wwan_dev *mhiwwan)
2822+{
2823+ spin_lock(&mhiwwan->rx_lock);
2824+
2825+ mhiwwan->rx_budget++;
2826+
2827+ if (test_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags))
2828+ schedule_work(&mhiwwan->rx_refill);
2829+
2830+ spin_unlock(&mhiwwan->rx_lock);
2831+}
2832+
2833+/* Decrement RX budget if non-zero and return true on success */
2834+static bool mhi_wwan_rx_budget_dec(struct mhi_wwan_dev *mhiwwan)
2835+{
2836+ bool ret = false;
2837+
2838+ spin_lock(&mhiwwan->rx_lock);
2839+
2840+ if (mhiwwan->rx_budget) {
2841+ mhiwwan->rx_budget--;
2842+ if (test_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags))
2843+ ret = true;
2844+ }
2845+
2846+ spin_unlock(&mhiwwan->rx_lock);
2847+
2848+ return ret;
2849+}
2850+
2851+static void __mhi_skb_destructor(struct sk_buff *skb)
2852+{
2853+ /* RX buffer has been consumed, increase the allowed budget */
2854+ mhi_wwan_rx_budget_inc(skb_shinfo(skb)->destructor_arg);
2855+}
2856+
2857+static void mhi_wwan_ctrl_refill_work(struct work_struct *work)
2858+{
2859+ struct mhi_wwan_dev *mhiwwan = container_of(work, struct mhi_wwan_dev, rx_refill);
2860+ struct mhi_device *mhi_dev = mhiwwan->mhi_dev;
2861+
2862+ while (mhi_wwan_rx_budget_dec(mhiwwan)) {
2863+ struct sk_buff *skb;
2864+
2865+ skb = alloc_skb(mhiwwan->mtu, GFP_KERNEL);
2866+ if (!skb) {
2867+ mhi_wwan_rx_budget_inc(mhiwwan);
2868+ break;
2869+ }
2870+
2871+ /* To prevent unlimited buffer allocation if nothing consumes
2872+ * the RX buffers (passed to the WWAN core), track their lifespan
2873+ * so that no more than the allowed budget is allocated.
2874+ */
2875+ skb->destructor = __mhi_skb_destructor;
2876+ skb_shinfo(skb)->destructor_arg = mhiwwan;
2877+
2878+ if (mhi_queue_skb(mhi_dev, DMA_FROM_DEVICE, skb, mhiwwan->mtu, MHI_EOT)) {
2879+ dev_err(&mhi_dev->dev, "Failed to queue buffer\n");
2880+ kfree_skb(skb);
2881+ break;
2882+ }
2883+ }
2884+}
2885+
2886+static int mhi_wwan_ctrl_start(struct wwan_port *port)
2887+{
2888+ struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port);
2889+ int ret;
2890+
2891+ /* Start mhi device's channel(s) */
2892+ ret = mhi_prepare_for_transfer(mhiwwan->mhi_dev);
2893+ if (ret)
2894+ return ret;
2895+
2896+ /* Don't allocate more buffers than MHI channel queue size */
2897+ mhiwwan->rx_budget = mhi_get_free_desc_count(mhiwwan->mhi_dev, DMA_FROM_DEVICE);
2898+
2899+ /* Add buffers to the MHI inbound queue */
2900+ if (test_bit(MHI_WWAN_DL_CAP, &mhiwwan->flags)) {
2901+ set_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags);
2902+ mhi_wwan_ctrl_refill_work(&mhiwwan->rx_refill);
2903+ }
2904+
2905+ return 0;
2906+}
2907+
2908+static void mhi_wwan_ctrl_stop(struct wwan_port *port)
2909+{
2910+ struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port);
2911+
2912+ spin_lock(&mhiwwan->rx_lock);
2913+ clear_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags);
2914+ spin_unlock(&mhiwwan->rx_lock);
2915+
2916+ cancel_work_sync(&mhiwwan->rx_refill);
2917+
2918+ mhi_unprepare_from_transfer(mhiwwan->mhi_dev);
2919+}
2920+
2921+static int mhi_wwan_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
2922+{
2923+ struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port);
2924+ int ret;
2925+
2926+ if (skb->len > mhiwwan->mtu)
2927+ return -EMSGSIZE;
2928+
2929+ if (!test_bit(MHI_WWAN_UL_CAP, &mhiwwan->flags))
2930+ return -EOPNOTSUPP;
2931+
2932+ /* Queue the packet for MHI transfer and check fullness of the queue */
2933+ spin_lock_bh(&mhiwwan->tx_lock);
2934+ ret = mhi_queue_skb(mhiwwan->mhi_dev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT);
2935+ if (mhi_queue_is_full(mhiwwan->mhi_dev, DMA_TO_DEVICE))
2936+ wwan_port_txoff(port);
2937+ spin_unlock_bh(&mhiwwan->tx_lock);
2938+
2939+ return ret;
2940+}
2941+
2942+static const struct wwan_port_ops wwan_pops = {
2943+ .start = mhi_wwan_ctrl_start,
2944+ .stop = mhi_wwan_ctrl_stop,
2945+ .tx = mhi_wwan_ctrl_tx,
2946+};
2947+
2948+static void mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
2949+ struct mhi_result *mhi_result)
2950+{
2951+ struct mhi_wwan_dev *mhiwwan = dev_get_drvdata(&mhi_dev->dev);
2952+ struct wwan_port *port = mhiwwan->wwan_port;
2953+ struct sk_buff *skb = mhi_result->buf_addr;
2954+
2955+ dev_dbg(&mhi_dev->dev, "%s: status: %d xfer_len: %zu\n", __func__,
2956+ mhi_result->transaction_status, mhi_result->bytes_xferd);
2957+
2958+ /* The MHI core is done with the buffer, release it */
2959+ consume_skb(skb);
2960+
2961+ /* There is likely a new slot available in the MHI queue, re-allow TX */
2962+ spin_lock_bh(&mhiwwan->tx_lock);
2963+ if (!mhi_queue_is_full(mhiwwan->mhi_dev, DMA_TO_DEVICE))
2964+ wwan_port_txon(port);
2965+ spin_unlock_bh(&mhiwwan->tx_lock);
2966+}
2967+
2968+static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
2969+ struct mhi_result *mhi_result)
2970+{
2971+ struct mhi_wwan_dev *mhiwwan = dev_get_drvdata(&mhi_dev->dev);
2972+ struct wwan_port *port = mhiwwan->wwan_port;
2973+ struct sk_buff *skb = mhi_result->buf_addr;
2974+
2975+ dev_dbg(&mhi_dev->dev, "%s: status: %d receive_len: %zu\n", __func__,
2976+ mhi_result->transaction_status, mhi_result->bytes_xferd);
2977+
2978+ if (mhi_result->transaction_status &&
2979+ mhi_result->transaction_status != -EOVERFLOW) {
2980+ kfree_skb(skb);
2981+ return;
2982+ }
2983+
2984+ /* The MHI core does not update skb->len, do it before forwarding */
2985+ skb_put(skb, mhi_result->bytes_xferd);
2986+ wwan_port_rx(port, skb);
2987+
2988+ /* Do not increment the RX budget or refill RX buffers now; wait for the
2989+ * buffer to be consumed. This is done from __mhi_skb_destructor().
2990+ */
2991+}
2992+
2993+static int mhi_wwan_ctrl_probe(struct mhi_device *mhi_dev,
2994+ const struct mhi_device_id *id)
2995+{
2996+ struct mhi_controller *cntrl = mhi_dev->mhi_cntrl;
2997+ struct mhi_wwan_dev *mhiwwan;
2998+ struct wwan_port *port;
2999+
3000+ mhiwwan = kzalloc(sizeof(*mhiwwan), GFP_KERNEL);
3001+ if (!mhiwwan)
3002+ return -ENOMEM;
3003+
3004+ mhiwwan->mhi_dev = mhi_dev;
3005+ mhiwwan->mtu = MHI_WWAN_MAX_MTU;
3006+ INIT_WORK(&mhiwwan->rx_refill, mhi_wwan_ctrl_refill_work);
3007+ spin_lock_init(&mhiwwan->tx_lock);
3008+ spin_lock_init(&mhiwwan->rx_lock);
3009+
3010+ if (mhi_dev->dl_chan)
3011+ set_bit(MHI_WWAN_DL_CAP, &mhiwwan->flags);
3012+ if (mhi_dev->ul_chan)
3013+ set_bit(MHI_WWAN_UL_CAP, &mhiwwan->flags);
3014+
3015+ dev_set_drvdata(&mhi_dev->dev, mhiwwan);
3016+
3017+ /* Register as a wwan port, id->driver_data contains wwan port type */
3018+ port = wwan_create_port(&cntrl->mhi_dev->dev, id->driver_data,
3019+ &wwan_pops, mhiwwan);
3020+ if (IS_ERR(port)) {
3021+ kfree(mhiwwan);
3022+ return PTR_ERR(port);
3023+ }
3024+
3025+ mhiwwan->wwan_port = port;
3026+
3027+ return 0;
3028+};
3029+
3030+static void mhi_wwan_ctrl_remove(struct mhi_device *mhi_dev)
3031+{
3032+ struct mhi_wwan_dev *mhiwwan = dev_get_drvdata(&mhi_dev->dev);
3033+
3034+ wwan_remove_port(mhiwwan->wwan_port);
3035+ kfree(mhiwwan);
3036+}
3037+
3038+static const struct mhi_device_id mhi_wwan_ctrl_match_table[] = {
3039+ { .chan = "DUN", .driver_data = WWAN_PORT_AT },
3040+ { .chan = "MBIM", .driver_data = WWAN_PORT_MBIM },
3041+ { .chan = "QMI", .driver_data = WWAN_PORT_QMI },
3042+ { .chan = "DIAG", .driver_data = WWAN_PORT_QCDM },
3043+ { .chan = "FIREHOSE", .driver_data = WWAN_PORT_FIREHOSE },
3044+ {},
3045+};
3046+MODULE_DEVICE_TABLE(mhi, mhi_wwan_ctrl_match_table);
3047+
3048+static struct mhi_driver mhi_wwan_ctrl_driver = {
3049+ .id_table = mhi_wwan_ctrl_match_table,
3050+ .remove = mhi_wwan_ctrl_remove,
3051+ .probe = mhi_wwan_ctrl_probe,
3052+ .ul_xfer_cb = mhi_ul_xfer_cb,
3053+ .dl_xfer_cb = mhi_dl_xfer_cb,
3054+ .driver = {
3055+ .name = "mhi_wwan_ctrl",
3056+ },
3057+};
3058+
3059+module_mhi_driver(mhi_wwan_ctrl_driver);
3060+
3061+MODULE_LICENSE("GPL v2");
3062+MODULE_DESCRIPTION("MHI WWAN CTRL Driver");
3063+MODULE_AUTHOR("Loic Poulain <loic.poulain@linaro.org>");
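
Once probed, the MBIM/QMI/DIAG/AT channels matched above are exposed as wwan_core character devices, which is the easiest way to smoke-test the control path. A minimal userspace sketch follows; the /dev node name is an assumption (it depends on the wwan core port naming and on which channel the modem maps to AT), so adjust it to what shows up on the target:

    /* Illustration only: send "AT" on a WWAN control port and print the reply.
     * "/dev/wwan0p1" is a placeholder device node name.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[256];
            ssize_t n;
            int fd = open("/dev/wwan0p1", O_RDWR);  /* placeholder port node */

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (write(fd, "AT\r", 3) != 3)
                    perror("write");
            n = read(fd, buf, sizeof(buf) - 1);     /* blocking read of the reply */
            if (n > 0) {
                    buf[n] = '\0';
                    printf("%s\n", buf);
            }
            close(fd);
            return 0;
    }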
3064diff --git a/drivers/net/wwan/wwan_core.c b/drivers/net/wwan/wwan_core.c
3065new file mode 100644
3066index 0000000..cff04e5
3067--- /dev/null
3068+++ b/drivers/net/wwan/wwan_core.c
3069@@ -0,0 +1,554 @@
3070+// SPDX-License-Identifier: GPL-2.0-only
3071+/* Copyright (c) 2021, Linaro Ltd <loic.poulain@linaro.org> */
3072+
3073+#include <linux/err.h>
3074+#include <linux/errno.h>
3075+#include <linux/fs.h>
3076+#include <linux/init.h>
3077+#include <linux/idr.h>
3078+#include <linux/kernel.h>
3079+#include <linux/module.h>
3080+#include <linux/poll.h>
3081+#include <linux/skbuff.h>
3082+#include <linux/slab.h>
3083+#include <linux/types.h>
3084+#include <linux/wwan.h>
3085+
3086+#define WWAN_MAX_MINORS 256 /* 256 minors allowed with register_chrdev() */
3087+
3088+static DEFINE_MUTEX(wwan_register_lock); /* WWAN device create|remove lock */
3089+static DEFINE_IDA(minors); /* minors for WWAN port chardevs */
3090+static DEFINE_IDA(wwan_dev_ids); /* for unique WWAN device IDs */
3091+static struct class *wwan_class;
3092+static int wwan_major;
3093+
3094+#define to_wwan_dev(d) container_of(d, struct wwan_device, dev)
3095+#define to_wwan_port(d) container_of(d, struct wwan_port, dev)
3096+
3097+/* WWAN port flags */
3098+#define WWAN_PORT_TX_OFF 0
3099+
3100+/**
3101+ * struct wwan_device - The structure that defines a WWAN device
3102+ *
3103+ * @id: WWAN device unique ID.
3104+ * @dev: Underlying device.
3105+ * @port_id: Current available port ID to pick.
3106+ */
3107+struct wwan_device {
3108+ unsigned int id;
3109+ struct device dev;
3110+ atomic_t port_id;
3111+};
3112+
3113+/**
3114+ * struct wwan_port - The structure that defines a WWAN port
3115+ * @type: Port type
3116+ * @start_count: Port start counter
3117+ * @flags: Store port state and capabilities
3118+ * @ops: Pointer to WWAN port operations
3119+ * @ops_lock: Protect port ops
3120+ * @dev: Underlying device
3121+ * @rxq: Buffer inbound queue
3122+ * @waitqueue: The waitqueue for port fops (read/write/poll)
3123+ */
3124+struct wwan_port {
3125+ enum wwan_port_type type;
3126+ unsigned int start_count;
3127+ unsigned long flags;
3128+ const struct wwan_port_ops *ops;
3129+ struct mutex ops_lock; /* Serialize ops + protect against removal */
3130+ struct device dev;
3131+ struct sk_buff_head rxq;
3132+ wait_queue_head_t waitqueue;
3133+};
3134+
3135+static void wwan_dev_destroy(struct device *dev)
3136+{
3137+ struct wwan_device *wwandev = to_wwan_dev(dev);
3138+
3139+ ida_free(&wwan_dev_ids, wwandev->id);
3140+ kfree(wwandev);
3141+}
3142+
3143+static const struct device_type wwan_dev_type = {
3144+ .name = "wwan_dev",
3145+ .release = wwan_dev_destroy,
3146+};
3147+
3148+static int wwan_dev_parent_match(struct device *dev, const void *parent)
3149+{
3150+ return (dev->type == &wwan_dev_type && dev->parent == parent);
3151+}
3152+
3153+static struct wwan_device *wwan_dev_get_by_parent(struct device *parent)
3154+{
3155+ struct device *dev;
3156+
3157+ dev = class_find_device(wwan_class, NULL, parent, wwan_dev_parent_match);
3158+ if (!dev)
3159+ return ERR_PTR(-ENODEV);
3160+
3161+ return to_wwan_dev(dev);
3162+}
3163+
3164+/* This function allocates and registers a new WWAN device OR if a WWAN device
3165+ * already exists for the given parent, it gets a reference and returns it.
3166+ * This function is not exported (for now); it is called indirectly via
3167+ * wwan_create_port().
3168+ */
3169+static struct wwan_device *wwan_create_dev(struct device *parent)
3170+{
3171+ struct wwan_device *wwandev;
3172+ int err, id;
3173+
3174+ /* The 'find-alloc-register' operation must be protected against
3175+ * concurrent execution, since a WWAN device may be shared between
3176+ * multiple callers or concurrently unregistered from wwan_remove_dev().
3177+ */
3178+ mutex_lock(&wwan_register_lock);
3179+
3180+ /* If wwandev already exists, return it */
3181+ wwandev = wwan_dev_get_by_parent(parent);
3182+ if (!IS_ERR(wwandev))
3183+ goto done_unlock;
3184+
3185+ id = ida_alloc(&wwan_dev_ids, GFP_KERNEL);
3186+ if (id < 0)
3187+ goto done_unlock;
3188+
3189+ wwandev = kzalloc(sizeof(*wwandev), GFP_KERNEL);
3190+ if (!wwandev) {
3191+ ida_free(&wwan_dev_ids, id);
3192+ goto done_unlock;
3193+ }
3194+
3195+ wwandev->dev.parent = parent;
3196+ wwandev->dev.class = wwan_class;
3197+ wwandev->dev.type = &wwan_dev_type;
3198+ wwandev->id = id;
3199+ dev_set_name(&wwandev->dev, "wwan%d", wwandev->id);
3200+
3201+ err = device_register(&wwandev->dev);
3202+ if (err) {
3203+ put_device(&wwandev->dev);
3204+ wwandev = NULL;
3205+ }
3206+
3207+done_unlock:
3208+ mutex_unlock(&wwan_register_lock);
3209+
3210+ return wwandev;
3211+}
3212+
3213+static int is_wwan_child(struct device *dev, void *data)
3214+{
3215+ return dev->class == wwan_class;
3216+}
3217+
3218+static void wwan_remove_dev(struct wwan_device *wwandev)
3219+{
3220+ int ret;
3221+
3222+ /* Prevent concurrent picking from wwan_create_dev */
3223+ mutex_lock(&wwan_register_lock);
3224+
3225+ /* WWAN device is created and registered (get+add) along with its first
3226+ * child port, and subsequent port registrations only grab a reference
3227+ * (get). The WWAN device must then be unregistered (del+put) along with
3228+ * its last port, and the reference simply dropped (put) otherwise.
3229+ */
3230+ ret = device_for_each_child(&wwandev->dev, NULL, is_wwan_child);
3231+ if (!ret)
3232+ device_unregister(&wwandev->dev);
3233+ else
3234+ put_device(&wwandev->dev);
3235+
3236+ mutex_unlock(&wwan_register_lock);
3237+}
3238+
3239+/* ------- WWAN port management ------- */
3240+
3241+static void wwan_port_destroy(struct device *dev)
3242+{
3243+ struct wwan_port *port = to_wwan_port(dev);
3244+
3245+ ida_free(&minors, MINOR(port->dev.devt));
3246+ skb_queue_purge(&port->rxq);
3247+ mutex_destroy(&port->ops_lock);
3248+ kfree(port);
3249+}
3250+
3251+static const struct device_type wwan_port_dev_type = {
3252+ .name = "wwan_port",
3253+ .release = wwan_port_destroy,
3254+};
3255+
3256+static int wwan_port_minor_match(struct device *dev, const void *minor)
3257+{
3258+ return (dev->type == &wwan_port_dev_type &&
3259+ MINOR(dev->devt) == *(unsigned int *)minor);
3260+}
3261+
3262+static struct wwan_port *wwan_port_get_by_minor(unsigned int minor)
3263+{
3264+ struct device *dev;
3265+
3266+ dev = class_find_device(wwan_class, NULL, &minor, wwan_port_minor_match);
3267+ if (!dev)
3268+ return ERR_PTR(-ENODEV);
3269+
3270+ return to_wwan_port(dev);
3271+}
3272+
3273+/* Keep aligned with wwan_port_type enum */
3274+static const char * const wwan_port_type_str[] = {
3275+ "AT",
3276+ "MBIM",
3277+ "QMI",
3278+ "QCDM",
3279+ "FIREHOSE"
3280+};
3281+
3282+struct wwan_port *wwan_create_port(struct device *parent,
3283+ enum wwan_port_type type,
3284+ const struct wwan_port_ops *ops,
3285+ void *drvdata)
3286+{
3287+ struct wwan_device *wwandev;
3288+ struct wwan_port *port;
3289+ int minor, err = -ENOMEM;
3290+
3291+ if (type >= WWAN_PORT_MAX || !ops)
3292+ return ERR_PTR(-EINVAL);
3293+
3294+ /* A port is always a child of a WWAN device; retrieve (allocate or
3295+ * pick) the WWAN device based on the provided parent device.
3296+ */
3297+ wwandev = wwan_create_dev(parent);
3298+ if (IS_ERR(wwandev))
3299+ return ERR_CAST(wwandev);
3300+
3301+ /* A port is exposed as a character device; get a minor */
3302+ minor = ida_alloc_range(&minors, 0, WWAN_MAX_MINORS - 1, GFP_KERNEL);
3303+ if (minor < 0)
3304+ goto error_wwandev_remove;
3305+
3306+ port = kzalloc(sizeof(*port), GFP_KERNEL);
3307+ if (!port) {
3308+ ida_free(&minors, minor);
3309+ goto error_wwandev_remove;
3310+ }
3311+
3312+ port->type = type;
3313+ port->ops = ops;
3314+ mutex_init(&port->ops_lock);
3315+ skb_queue_head_init(&port->rxq);
3316+ init_waitqueue_head(&port->waitqueue);
3317+
3318+ port->dev.parent = &wwandev->dev;
3319+ port->dev.class = wwan_class;
3320+ port->dev.type = &wwan_port_dev_type;
3321+ port->dev.devt = MKDEV(wwan_major, minor);
3322+ dev_set_drvdata(&port->dev, drvdata);
3323+
3324+ /* create unique name based on wwan device id, port index and type */
3325+ dev_set_name(&port->dev, "wwan%up%u%s", wwandev->id,
3326+ atomic_inc_return(&wwandev->port_id),
3327+ wwan_port_type_str[port->type]);
3328+
3329+ err = device_register(&port->dev);
3330+ if (err)
3331+ goto error_put_device;
3332+
3333+ return port;
3334+
3335+error_put_device:
3336+ put_device(&port->dev);
3337+error_wwandev_remove:
3338+ wwan_remove_dev(wwandev);
3339+
3340+ return ERR_PTR(err);
3341+}
3342+EXPORT_SYMBOL_GPL(wwan_create_port);
3343+
3344+void wwan_remove_port(struct wwan_port *port)
3345+{
3346+ struct wwan_device *wwandev = to_wwan_dev(port->dev.parent);
3347+
3348+ mutex_lock(&port->ops_lock);
3349+ if (port->start_count)
3350+ port->ops->stop(port);
3351+ port->ops = NULL; /* Prevent any new port operations (e.g. from fops) */
3352+ mutex_unlock(&port->ops_lock);
3353+
3354+ wake_up_interruptible(&port->waitqueue);
3355+
3356+ skb_queue_purge(&port->rxq);
3357+ dev_set_drvdata(&port->dev, NULL);
3358+ device_unregister(&port->dev);
3359+
3360+ /* Release related wwan device */
3361+ wwan_remove_dev(wwandev);
3362+}
3363+EXPORT_SYMBOL_GPL(wwan_remove_port);
3364+
3365+void wwan_port_rx(struct wwan_port *port, struct sk_buff *skb)
3366+{
3367+ skb_queue_tail(&port->rxq, skb);
3368+ wake_up_interruptible(&port->waitqueue);
3369+}
3370+EXPORT_SYMBOL_GPL(wwan_port_rx);
3371+
3372+void wwan_port_txon(struct wwan_port *port)
3373+{
3374+ clear_bit(WWAN_PORT_TX_OFF, &port->flags);
3375+ wake_up_interruptible(&port->waitqueue);
3376+}
3377+EXPORT_SYMBOL_GPL(wwan_port_txon);
3378+
3379+void wwan_port_txoff(struct wwan_port *port)
3380+{
3381+ set_bit(WWAN_PORT_TX_OFF, &port->flags);
3382+}
3383+EXPORT_SYMBOL_GPL(wwan_port_txoff);
3384+
3385+void *wwan_port_get_drvdata(struct wwan_port *port)
3386+{
3387+ return dev_get_drvdata(&port->dev);
3388+}
3389+EXPORT_SYMBOL_GPL(wwan_port_get_drvdata);
3390+
3391+static int wwan_port_op_start(struct wwan_port *port)
3392+{
3393+ int ret = 0;
3394+
3395+ mutex_lock(&port->ops_lock);
3396+ if (!port->ops) { /* Port got unplugged */
3397+ ret = -ENODEV;
3398+ goto out_unlock;
3399+ }
3400+
3401+ /* If port is already started, don't start again */
3402+ if (!port->start_count)
3403+ ret = port->ops->start(port);
3404+
3405+ if (!ret)
3406+ port->start_count++;
3407+
3408+out_unlock:
3409+ mutex_unlock(&port->ops_lock);
3410+
3411+ return ret;
3412+}
3413+
3414+static void wwan_port_op_stop(struct wwan_port *port)
3415+{
3416+ mutex_lock(&port->ops_lock);
3417+ port->start_count--;
3418+ if (port->ops && !port->start_count)
3419+ port->ops->stop(port);
3420+ mutex_unlock(&port->ops_lock);
3421+}
3422+
3423+static int wwan_port_op_tx(struct wwan_port *port, struct sk_buff *skb)
3424+{
3425+ int ret;
3426+
3427+ mutex_lock(&port->ops_lock);
3428+ if (!port->ops) { /* Port got unplugged */
3429+ ret = -ENODEV;
3430+ goto out_unlock;
3431+ }
3432+
3433+ ret = port->ops->tx(port, skb);
3434+
3435+out_unlock:
3436+ mutex_unlock(&port->ops_lock);
3437+
3438+ return ret;
3439+}
3440+
3441+static bool is_read_blocked(struct wwan_port *port)
3442+{
3443+ return skb_queue_empty(&port->rxq) && port->ops;
3444+}
3445+
3446+static bool is_write_blocked(struct wwan_port *port)
3447+{
3448+ return test_bit(WWAN_PORT_TX_OFF, &port->flags) && port->ops;
3449+}
3450+
3451+static int wwan_wait_rx(struct wwan_port *port, bool nonblock)
3452+{
3453+ if (!is_read_blocked(port))
3454+ return 0;
3455+
3456+ if (nonblock)
3457+ return -EAGAIN;
3458+
3459+ if (wait_event_interruptible(port->waitqueue, !is_read_blocked(port)))
3460+ return -ERESTARTSYS;
3461+
3462+ return 0;
3463+}
3464+
3465+static int wwan_wait_tx(struct wwan_port *port, bool nonblock)
3466+{
3467+ if (!is_write_blocked(port))
3468+ return 0;
3469+
3470+ if (nonblock)
3471+ return -EAGAIN;
3472+
3473+ if (wait_event_interruptible(port->waitqueue, !is_write_blocked(port)))
3474+ return -ERESTARTSYS;
3475+
3476+ return 0;
3477+}
3478+
3479+static int wwan_port_fops_open(struct inode *inode, struct file *file)
3480+{
3481+ struct wwan_port *port;
3482+ int err = 0;
3483+
3484+ port = wwan_port_get_by_minor(iminor(inode));
3485+ if (IS_ERR(port))
3486+ return PTR_ERR(port);
3487+
3488+ file->private_data = port;
3489+ stream_open(inode, file);
3490+
3491+ err = wwan_port_op_start(port);
3492+ if (err)
3493+ put_device(&port->dev);
3494+
3495+ return err;
3496+}
3497+
3498+static int wwan_port_fops_release(struct inode *inode, struct file *filp)
3499+{
3500+ struct wwan_port *port = filp->private_data;
3501+
3502+ wwan_port_op_stop(port);
3503+ put_device(&port->dev);
3504+
3505+ return 0;
3506+}
3507+
3508+static ssize_t wwan_port_fops_read(struct file *filp, char __user *buf,
3509+ size_t count, loff_t *ppos)
3510+{
3511+ struct wwan_port *port = filp->private_data;
3512+ struct sk_buff *skb;
3513+ size_t copied;
3514+ int ret;
3515+
3516+ ret = wwan_wait_rx(port, !!(filp->f_flags & O_NONBLOCK));
3517+ if (ret)
3518+ return ret;
3519+
3520+ skb = skb_dequeue(&port->rxq);
3521+ if (!skb)
3522+ return -EIO;
3523+
3524+ copied = min_t(size_t, count, skb->len);
3525+ if (copy_to_user(buf, skb->data, copied)) {
3526+ kfree_skb(skb);
3527+ return -EFAULT;
3528+ }
3529+ skb_pull(skb, copied);
3530+
3531+ /* skb is not fully consumed, keep it in the queue */
3532+ if (skb->len)
3533+ skb_queue_head(&port->rxq, skb);
3534+ else
3535+ consume_skb(skb);
3536+
3537+ return copied;
3538+}
3539+
3540+static ssize_t wwan_port_fops_write(struct file *filp, const char __user *buf,
3541+ size_t count, loff_t *offp)
3542+{
3543+ struct wwan_port *port = filp->private_data;
3544+ struct sk_buff *skb;
3545+ int ret;
3546+
3547+ ret = wwan_wait_tx(port, !!(filp->f_flags & O_NONBLOCK));
3548+ if (ret)
3549+ return ret;
3550+
3551+ skb = alloc_skb(count, GFP_KERNEL);
3552+ if (!skb)
3553+ return -ENOMEM;
3554+
3555+ if (copy_from_user(skb_put(skb, count), buf, count)) {
3556+ kfree_skb(skb);
3557+ return -EFAULT;
3558+ }
3559+
3560+ ret = wwan_port_op_tx(port, skb);
3561+ if (ret) {
3562+ kfree_skb(skb);
3563+ return ret;
3564+ }
3565+
3566+ return count;
3567+}
3568+
3569+static __poll_t wwan_port_fops_poll(struct file *filp, poll_table *wait)
3570+{
3571+ struct wwan_port *port = filp->private_data;
3572+ __poll_t mask = 0;
3573+
3574+ poll_wait(filp, &port->waitqueue, wait);
3575+
3576+ if (!is_write_blocked(port))
3577+ mask |= EPOLLOUT | EPOLLWRNORM;
3578+ if (!is_read_blocked(port))
3579+ mask |= EPOLLIN | EPOLLRDNORM;
3580+ if (!port->ops)
3581+ mask |= EPOLLHUP | EPOLLERR;
3582+
3583+ return mask;
3584+}
3585+
3586+static const struct file_operations wwan_port_fops = {
3587+ .owner = THIS_MODULE,
3588+ .open = wwan_port_fops_open,
3589+ .release = wwan_port_fops_release,
3590+ .read = wwan_port_fops_read,
3591+ .write = wwan_port_fops_write,
3592+ .poll = wwan_port_fops_poll,
3593+ .llseek = noop_llseek,
3594+};
3595+
3596+static int __init wwan_init(void)
3597+{
3598+ wwan_class = class_create(THIS_MODULE, "wwan");
3599+ if (IS_ERR(wwan_class))
3600+ return PTR_ERR(wwan_class);
3601+
3602+ /* chrdev used for wwan ports */
3603+ wwan_major = register_chrdev(0, "wwan_port", &wwan_port_fops);
3604+ if (wwan_major < 0) {
3605+ class_destroy(wwan_class);
3606+ return wwan_major;
3607+ }
3608+
3609+ return 0;
3610+}
3611+
3612+static void __exit wwan_exit(void)
3613+{
3614+ unregister_chrdev(wwan_major, "wwan_port");
3615+ class_destroy(wwan_class);
3616+}
3617+
3618+module_init(wwan_init);
3619+module_exit(wwan_exit);
3620+
3621+MODULE_AUTHOR("Loic Poulain <loic.poulain@linaro.org>");
3622+MODULE_DESCRIPTION("WWAN core");
3623+MODULE_LICENSE("GPL v2");
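
As a reading aid for the wwan_core API added above (wwan_create_port(), wwan_port_rx(), wwan_remove_port() and the TX flow-control helpers), here is a minimal, hypothetical port-driver sketch. The my_modem_* names, the AT port choice and the transport hooks are assumptions for illustration only; they are not part of this series.

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/skbuff.h>
    #include <linux/wwan.h>

    struct my_modem {
        struct wwan_port *at_port;  /* one port per control protocol */
    };

    static int my_modem_port_start(struct wwan_port *port)
    {
        /* Bring up the underlying control channel here. */
        return 0;
    }

    static void my_modem_port_stop(struct wwan_port *port)
    {
        /* Quiesce the underlying control channel here. */
    }

    static int my_modem_port_tx(struct wwan_port *port, struct sk_buff *skb)
    {
        /* A real driver fetches its state with wwan_port_get_drvdata(port)
         * and hands @skb to its transport, calling wwan_port_txoff(port)
         * when the TX path fills up and wwan_port_txon(port) once it
         * drains again so blocked writers and pollers are woken.
         */
        consume_skb(skb);  /* placeholder: pretend it was sent */
        return 0;
    }

    static const struct wwan_port_ops my_modem_port_ops = {
        .start = my_modem_port_start,
        .stop = my_modem_port_stop,
        .tx = my_modem_port_tx,
    };

    /* Called from the transport RX path: queue the buffer for readers/pollers */
    static void my_modem_rx(struct my_modem *modem, struct sk_buff *skb)
    {
        wwan_port_rx(modem->at_port, skb);
    }

    static int my_modem_register_ports(struct device *parent, struct my_modem *modem)
    {
        /* All ports of one modem share @parent, so they are grouped under
         * the same virtual wwanN device and exposed as e.g. /dev/wwan0p1AT.
         */
        modem->at_port = wwan_create_port(parent, WWAN_PORT_AT,
                                          &my_modem_port_ops, modem);
        return PTR_ERR_OR_ZERO(modem->at_port);
    }

    static void my_modem_unregister_ports(struct my_modem *modem)
    {
        wwan_remove_port(modem->at_port);  /* drops the wwanN reference too */
    }

The wwan%up%u%s naming comes straight from wwan_create_port() above, so the exact node name depends on the modem and on the order in which its ports were registered.
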
3624diff --git a/include/linux/mhi.h b/include/linux/mhi.h
3625index 562862f..d095fba 100644
3626--- a/include/linux/mhi.h
3627+++ b/include/linux/mhi.h
3628@@ -117,6 +117,7 @@ struct mhi_link_info {
3629 * @MHI_EE_WFW: WLAN firmware mode
3630 * @MHI_EE_PTHRU: Passthrough
3631 * @MHI_EE_EDL: Embedded downloader
3632+ * @MHI_EE_FP: Flash Programmer Environment
3633 */
3634 enum mhi_ee_type {
3635 MHI_EE_PBL,
3636@@ -126,7 +127,8 @@ enum mhi_ee_type {
3637 MHI_EE_WFW,
3638 MHI_EE_PTHRU,
3639 MHI_EE_EDL,
3640- MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
3641+ MHI_EE_FP,
3642+ MHI_EE_MAX_SUPPORTED = MHI_EE_FP,
3643 MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
3644 MHI_EE_NOT_SUPPORTED,
3645 MHI_EE_MAX,
3646@@ -279,7 +281,7 @@ struct mhi_controller_config {
3647 u32 num_channels;
3648 const struct mhi_channel_config *ch_cfg;
3649 u32 num_events;
3650- const struct mhi_event_config *event_cfg;
3651+ struct mhi_event_config *event_cfg;
3652 bool use_bounce_buf;
3653 bool m2_no_db;
3654 };
3655@@ -296,7 +298,7 @@ struct mhi_controller_config {
3656 * @wake_db: MHI WAKE doorbell register address
3657 * @iova_start: IOMMU starting address for data (required)
3658 * @iova_stop: IOMMU stop address for data (required)
3659- * @fw_image: Firmware image name for normal booting (required)
3660+ * @fw_image: Firmware image name for normal booting (optional)
3661 * @edl_image: Firmware image name for emergency download mode (optional)
3662 * @rddm_size: RAM dump size that host should allocate for debugging purpose
3663 * @sbl_size: SBL image size downloaded through BHIe (optional)
3664@@ -347,12 +349,13 @@ struct mhi_controller_config {
3665 * @unmap_single: CB function to destroy TRE buffer
3666 * @read_reg: Read a MHI register via the physical link (required)
3667 * @write_reg: Write a MHI register via the physical link (required)
3668+ * @reset: Controller specific reset function (optional)
3669 * @buffer_len: Bounce buffer length
3670 * @index: Index of the MHI controller instance
3671 * @bounce_buf: Use of bounce buffer
3672 * @fbc_download: MHI host needs to do complete image transfer (optional)
3673- * @pre_init: MHI host needs to do pre-initialization before power up
3674 * @wake_set: Device wakeup set flag
3675+ * @irq_flags: irq flags passed to request_irq (optional)
3676 *
3677 * Fields marked as (required) need to be populated by the controller driver
3678 * before calling mhi_register_controller(). For the fields marked as (optional)
3679@@ -437,13 +440,14 @@ struct mhi_controller {
3680 u32 *out);
3681 void (*write_reg)(struct mhi_controller *mhi_cntrl, void __iomem *addr,
3682 u32 val);
3683+ void (*reset)(struct mhi_controller *mhi_cntrl);
3684
3685 size_t buffer_len;
3686 int index;
3687 bool bounce_buf;
3688 bool fbc_download;
3689- bool pre_init;
3690 bool wake_set;
3691+ unsigned long irq_flags;
3692 };
3693
3694 /**
3695@@ -599,6 +603,15 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl,
3696 void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason);
3697
3698 /**
3699+ * mhi_get_free_desc_count - Get the number of free descriptors (TDs)
3700+ * available to queue buffers on a transfer ring
3701+ * @mhi_dev: Device associated with the channels
3702+ * @dir: Direction of the channel
3703+ */
3704+int mhi_get_free_desc_count(struct mhi_device *mhi_dev,
3705+ enum dma_data_direction dir);
3706+
3707+/**
3708 * mhi_prepare_for_power_up - Do pre-initialization before power up.
3709 * This is optional, call this before power up if
3710 * the controller does not want bus framework to
3711@@ -673,6 +686,13 @@ enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
3712 enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl);
3713
3714 /**
3715+ * mhi_soc_reset - Trigger a device reset. This can be used as a last resort
3716+ * to reset and recover a device.
3717+ * @mhi_cntrl: MHI controller
3718+ */
3719+void mhi_soc_reset(struct mhi_controller *mhi_cntrl);
3720+
3721+/**
3722 * mhi_device_get - Disable device low power mode
3723 * @mhi_dev: Device associated with the channel
3724 */
3725@@ -692,13 +712,27 @@ int mhi_device_get_sync(struct mhi_device *mhi_dev);
3726 void mhi_device_put(struct mhi_device *mhi_dev);
3727
3728 /**
3729- * mhi_prepare_for_transfer - Setup channel for data transfer
3730+ * mhi_prepare_for_transfer - Setup UL and DL channels for data transfer.
3731+ * Allocate and initialize the channel context and
3732+ * also issue the START channel command to both
3733+ * channels. Channels can be started only if both
3734+ * host and device execution environments match and
3735+ * channels are in a DISABLED state.
3736 * @mhi_dev: Device associated with the channels
3737 */
3738 int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
3739
3740 /**
3741- * mhi_unprepare_from_transfer - Unprepare the channels
3742+ * mhi_unprepare_from_transfer - Reset UL and DL channels for data transfer.
3743+ * Issue the RESET channel command and let the
3744+ * device clean-up the context so no incoming
3745+ * transfers are seen on the host. Free memory
3746+ * associated with the context on host. If device
3747+ * is unresponsive, only perform a host side
3748+ * clean-up. Channels can be reset only if both
3749+ * host and device execution environments match
3750+ * and channels are in an ENABLED, STOPPED or
3751+ * SUSPENDED state.
3752 * @mhi_dev: Device associated with the channels
3753 */
3754 void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
3755diff --git a/include/linux/wwan.h b/include/linux/wwan.h
3756new file mode 100644
3757index 0000000..aa05a25
3758--- /dev/null
3759+++ b/include/linux/wwan.h
3760@@ -0,0 +1,111 @@
3761+/* SPDX-License-Identifier: GPL-2.0-only */
3762+/* Copyright (c) 2021, Linaro Ltd <loic.poulain@linaro.org> */
3763+
3764+#ifndef __WWAN_H
3765+#define __WWAN_H
3766+
3767+#include <linux/device.h>
3768+#include <linux/kernel.h>
3769+#include <linux/skbuff.h>
3770+
3771+/**
3772+ * enum wwan_port_type - WWAN port types
3773+ * @WWAN_PORT_AT: AT commands
3774+ * @WWAN_PORT_MBIM: Mobile Broadband Interface Model control
3775+ * @WWAN_PORT_QMI: Qcom modem/MSM interface for modem control
3776+ * @WWAN_PORT_QCDM: Qcom Modem diagnostic interface
3777+ * @WWAN_PORT_FIREHOSE: XML based command protocol
3778+ * @WWAN_PORT_MAX: Number of supported port types
3779+ */
3780+enum wwan_port_type {
3781+ WWAN_PORT_AT,
3782+ WWAN_PORT_MBIM,
3783+ WWAN_PORT_QMI,
3784+ WWAN_PORT_QCDM,
3785+ WWAN_PORT_FIREHOSE,
3786+ WWAN_PORT_MAX,
3787+};
3788+
3789+struct wwan_port;
3790+
3791+/** struct wwan_port_ops - The WWAN port operations
3792+ * @start: The routine for starting the WWAN port device.
3793+ * @stop: The routine for stopping the WWAN port device.
3794+ * @tx: The routine that sends WWAN port protocol data to the device.
3795+ *
3796+ * The wwan_port_ops structure contains a list of low-level operations
3797+ * that control a WWAN port device. All functions are mandatory.
3798+ */
3799+struct wwan_port_ops {
3800+ int (*start)(struct wwan_port *port);
3801+ void (*stop)(struct wwan_port *port);
3802+ int (*tx)(struct wwan_port *port, struct sk_buff *skb);
3803+};
3804+
3805+/**
3806+ * wwan_create_port - Add a new WWAN port
3807+ * @parent: Device to use as parent and shared by all WWAN ports
3808+ * @type: WWAN port type
3809+ * @ops: WWAN port operations
3810+ * @drvdata: Pointer to caller driver data
3811+ *
3812+ * Allocate and register a new WWAN port. The port will be automatically exposed
3813+ * to userspace as a character device and attached to the right virtual WWAN device,
3814+ * based on the parent pointer. The parent pointer is the device shared by all
3815+ * components of the same WWAN modem (e.g. USB dev, PCI dev, MHI controller...).
3816+ *
3817+ * drvdata will be placed in the WWAN port device driver data and can be
3818+ * retrieved with wwan_port_get_drvdata().
3819+ *
3820+ * This function must be balanced with a call to wwan_remove_port().
3821+ *
3822+ * Returns a valid pointer to wwan_port on success or an ERR_PTR() on failure
3823+ */
3824+struct wwan_port *wwan_create_port(struct device *parent,
3825+ enum wwan_port_type type,
3826+ const struct wwan_port_ops *ops,
3827+ void *drvdata);
3828+
3829+/**
3830+ * wwan_remove_port - Remove a WWAN port
3831+ * @port: WWAN port to remove
3832+ *
3833+ * Remove a previously created port.
3834+ */
3835+void wwan_remove_port(struct wwan_port *port);
3836+
3837+/**
3838+ * wwan_port_rx - Receive data from the WWAN port
3839+ * @port: WWAN port for which data is received
3840+ * @skb: Pointer to the rx buffer
3841+ *
3842+ * A port driver calls this function upon data reception (MBIM, AT...).
3843+ */
3844+void wwan_port_rx(struct wwan_port *port, struct sk_buff *skb);
3845+
3846+/**
3847+ * wwan_port_txoff - Stop TX on WWAN port
3848+ * @port: WWAN port for which TX must be stopped
3849+ *
3850+ * Used for TX flow control; a port driver calls this function to indicate TX
3851+ * is temporarily unavailable (e.g. due to ring buffer fullness).
3852+ */
3853+void wwan_port_txoff(struct wwan_port *port);
3854+
3855+
3856+/**
3857+ * wwan_port_txon - Restart TX on WWAN port
3858+ * @port: WWAN port for which TX must be restarted
3859+ *
3860+ * Used for TX flow control; a port driver calls this function to indicate TX
3861+ * is available again.
3862+ */
3863+void wwan_port_txon(struct wwan_port *port);
3864+
3865+/**
3866+ * wwan_port_get_drvdata - Retrieve driver data from a WWAN port
3867+ * @port: Related WWAN port
3868+ */
3869+void *wwan_port_get_drvdata(struct wwan_port *port);
3870+
3871+#endif /* __WWAN_H */
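
And the matching userspace view: ports registered through this header end up as character devices handled by wwan_port_fops in wwan_core.c (blocking or non-blocking read/write plus poll). The sketch below is illustrative only; the /dev/wwan0p1AT node name and the AT command are assumptions that depend on the modem and on the order its ports were registered.

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char cmd[] = "AT\r";              /* assumes an AT-type port */
        char rsp[256];
        struct pollfd pfd = { .events = POLLIN | POLLOUT };
        ssize_t n;

        pfd.fd = open("/dev/wwan0p1AT", O_RDWR);  /* example node name */
        if (pfd.fd < 0) {
            perror("open");
            return 1;
        }

        /* POLLOUT is withheld while the driver has flow-controlled TX off */
        if (poll(&pfd, 1, 5000) > 0 && (pfd.revents & POLLOUT)) {
            if (write(pfd.fd, cmd, strlen(cmd)) < 0)
                perror("write");
        }

        /* POLLHUP/POLLERR here means the port was removed underneath us */
        if (poll(&pfd, 1, 5000) > 0 && (pfd.revents & POLLIN)) {
            n = read(pfd.fd, rsp, sizeof(rsp) - 1);
            if (n > 0) {
                rsp[n] = '\0';
                printf("%s", rsp);
            }
        }

        close(pfd.fd);
        return 0;
    }
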
